Live from SAPinsider Studio: Carsten Hilker on Universal Journal

Carsten Hilker, SAP, Finance Solution Management, joins SAPinsider Studio at the 2016 FIN-GRC event to discuss SAP S/4HANA and the Universal Journal.

This is an edited version of the transcript:

Ken Murphy, SAPinsider: Hi, this is Ken Murphy with SAPinsider. I am at the SAPinsider Financials 2016 event in Las Vegas. Today I’m pleased to be joined by Carsten Hilker of the Finance Solution Management team at SAP, and he is here today to talk to us a little bit about the Universal Journal. Carsten, thanks for being with us.

Carsten Hilker, SAP: You’re welcome. Thank you for having me.

Ken: Can you summarize the importance of the Universal Journal within the SAP S/4HANA architecture, and how significant a change or advancement is it for financial accounting and account reconciliation?

Carsten: It’s groundbreaking. For the last 30 years, information systems were basically built on the design principles of relational databases. So when we store data we try to store it across as many tables as possible to spread the data load, so that when you ran a report you only had to access a few of them. It was really complex, because you had to understand the data model, and when you tried to get information out of it you had to bring data from different sources together. With today’s technology on the hardware side, in-memory databases are so fast that we don’t need that database design anymore. So instead of storing our financial information across 30, 40, or 50 different tables, we basically took all the data and put it together in one line-item table. All the financial transactions and all the line items are now in one table, over 350 fields wide. There is no spreading out into all of these tables anymore, which means when we update we just insert a new line at the very end. There’s no blocking, there’s no locking. And when we report, we can report out of that same table; we don’t have to create joins or extracts of this information – we can consume it directly.
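Conceptually, the append-only, single-table design Carsten describes can be sketched in a few lines of Python. This is an illustration only, not SAP code; the field names are invented, not the actual ACDOCA columns.

```python
# A minimal sketch: one wide line-item table that postings are only ever
# appended to. Field names are illustrative, not real ACDOCA columns.
universal_journal = []  # the single line-item table (think ACDOCA)

def post(entry):
    """Posting = inserting one new line at the end. Nothing is updated in
    place across many tables, so there is no blocking or locking."""
    universal_journal.append(entry)

post({"account": "revenue", "amount": 100.0, "customer": "C1", "product": "P1"})
post({"account": "revenue", "amount": 250.0, "customer": "C2", "product": "P1"})

# Reporting reads the very same table directly -- no joins, no extracts.
total_revenue = sum(e["amount"] for e in universal_journal
                    if e["account"] == "revenue")
```

The point of the sketch is that recording and reporting share one structure: a posting is a single append, and a report is a single scan over the same rows.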

That’s groundbreaking also for the finance department, because when they work with financial information they often have to work across these different tables. So if you wanted an idea of revenue from a financial perspective in your balance sheet, and said break it down for me by product and by customer, you had to go maybe into a profitability table. You had to transfer your selection criteria 1-to-1, hopefully start with the same number at the top, and then drill down. Having everything in one table, you start with a balance sheet and say show me revenue by customer, by product, by whatever dimension. If you want to see the breakdown of inventory down to the plant, material, and storage location, if you want to see production variances maybe by the production lot and the product you produce – it’s all in this one table, so there’s far less extraction on the finance side. No more Excel gymnastics, no more VLOOKUPs, no more macros basically creating reports. You can go through the Universal Journal, and that’s where your information resides. So it’s a big change from an architectural perspective, and it has a big, big impact on the finance departments consuming that information.
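The drill-down Carsten describes – starting from one balance-sheet figure and breaking it down by any dimension on the line item – amounts to a group-by over the same rows. A minimal sketch, with invented field names and amounts:

```python
# Conceptual sketch: drilling one account balance down by any dimension is
# just a group-by over the same line-item rows. Data here is invented.
from collections import defaultdict

journal = [
    {"account": "revenue", "amount": 100.0, "customer": "C1", "product": "P1"},
    {"account": "revenue", "amount": 250.0, "customer": "C2", "product": "P1"},
    {"account": "revenue", "amount": 150.0, "customer": "C1", "product": "P2"},
]

def drill_down(rows, account, dimension):
    """Break one account balance down by any field on the line item."""
    totals = defaultdict(float)
    for row in rows:
        if row["account"] == account:
            totals[row[dimension]] += row["amount"]
    return dict(totals)

by_customer = drill_down(journal, "revenue", "customer")
by_product = drill_down(journal, "revenue", "product")
```

Because every breakdown reads the same rows, the totals always tie out: here both `by_customer` and `by_product` sum to the same 500.0, with no selection criteria to transfer between tables.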

Ken: Excel gymnastics, I like that. So how does it provide that simpler data model and how does that model lead to that single source of truth?

Carsten: It’s actually really interesting, because our data model grew over the last 20 years. So it’s an industry-strength data model in Finance that we basically inherited. And what our development team did is they said, let’s create this new table, which is basically a concatenation of all of these tables. When people say we have a new data model – we don’t have a new data model. We still have the old data model in place, but we added this new Universal Journal, which is a concatenation of all of these tables. So where an entry today goes to FI, to the BSEG table, and to CO at the same time, to COEP, now it doesn’t go into these underlying tables anymore – it goes directly into the Universal Journal and provides in those fields the information that we would have posted to FI and CO. So basically we keep the original infrastructure in place, but we also store that information in the Universal Journal, which is derived from all the fields that I had before. What’s really interesting – and we couldn’t do this in the past – is that if you have a structure like this that has all the fields that are in Finance, then often a posting might only fill 10, 20, or 30 fields. In the past that was bad, because even that would take space. Today, it doesn’t matter anymore. So the bottom line is that we added this table, this construct of the Universal Journal, and all information gets fed in there instead of into the underlying tables. Some of the tables are still there, and they’ll eventually be phased out over time. But that’s the enablement part. And what’s really important is that rather than ripping out the data model structure and putting something new in, we kept the structure in place, we added something to it, and what we added is something we call semantic consistency. So if a field in BSEG was called company code, it’s called company code in the Universal Journal. If it was called profit center, it’s called profit center.
And what that means from a consumption perspective is that the front ends can now basically consume information from here as well. We didn’t have to re-write our interfaces, we didn’t have to re-write the applications around it, because the report still says give me company code and account by period. And thereby we were able to introduce this new concept without making it disruptive to the organization.
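The "semantic consistency" idea can be sketched as follows: a report written years ago asks for fields by name, and because the Universal Journal keeps the same names, the report consumes either source unchanged. All names here are illustrative, not real SAP field names.

```python
# Sketch of semantic consistency: an old report keeps working because the
# Universal Journal reuses the old field names. Names are illustrative.
def legacy_report(rows):
    """A pre-existing report: amounts by company code, account, and period."""
    totals = {}
    for r in rows:
        key = (r["company_code"], r["account"], r["period"])
        totals[key] = totals.get(key, 0.0) + r["amount"]
    return totals

# Old source: a BSEG-style row.       New source: a Universal Journal row.
bseg_row   = {"company_code": "1000", "account": "400000", "period": 1, "amount": 50.0}
acdoca_row = {"company_code": "1000", "account": "400000", "period": 1, "amount": 70.0}

# The same report consumes both, because the field names did not change.
report = legacy_report([bseg_row, acdoca_row])
```

This is the non-disruptive part: the consuming side never notices that the storage underneath was consolidated into one table.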

Ken: So how then does the Universal Journal allow the Finance team to attack and re-design month-end processes?

Carsten: I think it’s two-fold. Because in-memory databases are substantially faster – up to a million times in theory, depending on how well the code and everything is built out – jobs that were predominantly run as batch jobs, that were long-running, that ran overnight, maybe so long-running that they could only run once, at month-end, can now run in seconds or minutes. And now organizations have the flexibility to make their workloads event-driven, rather than building their work schedules on or around batch schedules. I can, for example, work on intercompany reconciliation any time I want, if it only takes me a couple of minutes. So the new process would be that I run this process every day up to my event, which means at month-end there’s far less left. I make it a daily process, or a whenever-I-want process – much more event-driven, either because it’s on my schedule, or I want to, or I think it’s a good idea. So what we’re doing, especially for month-end, is taking the barrier away and saying that instead of running overnight for eight hours, it now runs any time you want, especially early in the month. We de-clutter the close, which means we do a lot of work up front and far less at the end, and that takes time and stress out of the close. So that’s one thing, the processes themselves. And part of month-end closing is always data validation, alignment, and reconciliation. That’s really running reports, running analyses, doing some line-item reviews – and each of these steps, because we can now consume the data directly from the Universal Journal, takes no time at all. I have an instant response time, much like our communication is, right? It’s instant. I don’t have to schedule jobs or run jobs or wait for batch windows where data is replicated into a BW system every four hours and I need to wait.
So if I make an adjustment posting today, I can go directly into Excel, see the change, and if I’m good with it then I’m good. If I need to make another posting, I’ll make it, go to Excel, and close my steps. There are also far fewer waves at month-end to load data and review data.
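The intercompany reconciliation example above can be sketched as a check that is cheap enough to run daily or on demand rather than as a month-end batch job. The companies, partners, and amounts here are invented for illustration.

```python
# Sketch of an event-driven close task: a fast intercompany check that can
# run any time, not just at month-end. All data here is invented.
def intercompany_mismatches(journal):
    """Net intercompany balance per company pair; non-zero nets need review."""
    nets = {}
    for e in journal:
        pair = tuple(sorted((e["company"], e["partner"])))
        nets[pair] = nets.get(pair, 0.0) + e["amount"]
    return {pair: net for pair, net in nets.items() if abs(net) > 1e-9}

journal = [
    {"company": "1000", "partner": "2000", "amount": 500.0},   # receivable
    {"company": "2000", "partner": "1000", "amount": -500.0},  # matching payable
    {"company": "1000", "partner": "3000", "amount": 75.0},    # unmatched
]

open_items = intercompany_mismatches(journal)
```

Running this every day means only the genuinely open items (here the 1000/3000 pair) are left to investigate, instead of a month’s worth of mismatches at the close.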

Ken: When moving to SAP S/4HANA Finance, what are the steps needed to migrate to the Universal Journal? Does the business get involved with that?

Carsten: The end point would be the S/4HANA system; the difference is basically the starting point. So let’s say you have a single system, a traditional ERP system. It’s a two-step process that you can run together. The first one is basically a technical migration of the database, where we’re loading the data, the fields, out of the traditional database and loading it into HANA. It’s much like a disaster recovery process, because we don’t do anything to the data model – we just reel it out and reel it back in. And then the ERP system runs on HANA. So that’s the technical database migration step. At SAP we’ve done this tens of thousands of times over the last 20, 30 years. The second step is the functional upgrade: taking the existing ERP functionality and putting S/4HANA Finance – the Simple Finance scope, which includes the Universal Journal – on top of it. That’s much like a traditional enhancement package; there are a couple of data dictionary changes, a couple of changes to programs and user interfaces, and you apply it much like an EhP. And then you have the features of S/4HANA Finance, like the Universal Journal. If you happen to be a customer with a heterogeneous system landscape – not all in one ERP system, maybe with multiple non-SAP systems – obviously it would take quite a bit of time to upgrade each of these or transition them to SAP. For those customers we have a concept in place called Central Finance, where we take a brand new S/4HANA Finance instance, stand it up, and replicate the financial data out of these source systems, SAP or non-SAP, into the Central Finance instance. It’s something that can be done without any disruption to the organization, because you’re just copying data, and it can be done in as little as three months, depending on how much process scope you want to set up.
And then you also have access to the Universal Journal – not only for customers that have SAP systems, but also for customers whose source systems are maybe Oracle, JD Edwards, or PeopleSoft. They can then plug into Central Finance and use the Excel front end and the Universal Journal as well. So it really depends on the customer’s situation, but basically it’s a database upgrade and a functional upgrade.
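The Central Finance replication idea can be sketched as mapping each source document onto a central chart of accounts as it is copied into one central journal. The source systems, account codes, and mapping table below are all invented for illustration, not SAP configuration.

```python
# Sketch of the Central Finance idea: documents from multiple source systems
# are replicated into one central journal, with source account codes mapped
# to a central chart of accounts. All mappings here are invented.
ACCOUNT_MAP = {
    ("ORACLE_GL", "4010"): "400000",   # (source system, source account) -> central account
    ("JDE", "R-100"): "400000",
}

def replicate(source_system, doc, central_journal):
    """Map one source document onto the central chart of accounts and post it."""
    central_journal.append({
        "source": source_system,
        "account": ACCOUNT_MAP[(source_system, doc["account"])],
        "amount": doc["amount"],
    })

central = []
replicate("ORACLE_GL", {"account": "4010", "amount": 100.0}, central)
replicate("JDE", {"account": "R-100", "amount": 40.0}, central)
```

The source systems keep running untouched; only copies of their financial documents land, harmonized, in the central instance.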

Ken: How does a company determine what data to record and report on and how does that change if you do have redesigned business processes?

Carsten: The Universal Journal at this point in time focuses on actuals, right? So basically all the actuals that you record come into the Universal Journal. What we have as a positive change is a couple of things. First of all, when you bring data in and data gets posted – like a sales order leading to revenue, or a production order leading to expense, these kinds of things – we can provide more information, because the Universal Journal coding block can be enhanced. It’s already larger than it was in the old systems. I can bring in fields from sales orders that could never be stored in BSEG. So I can bring in more data than before. We can also set up derivations in the system. For example, if you happen to use the project system and you know a charge goes to the project but is eventually also going to end up in a market unit or with a customer, you can set up additional derivations so that these postings are tagged with more dimensionality the first time they hit the system. So that’s how we change the recording. The important part for reporting is that the Universal Journal is not only where we record all financial transactions – we also report out of it. So for the business user there’s no extract into a BusinessObjects universe so you can run Explorer on top of it, or into an InfoObject on a BW so you can do BEx reporting; they can use those tools and directly consume this information, which will rapidly change the way they consume it. And what’s really important for us is that while we do this for actuals today – the table used to store the data is called ACDOCA, for actuals – ACDOCP, for planning, is already in the works and forthcoming. And that will bring the actuals and the planning together in one single source of truth.
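A posting-time derivation of the kind Carsten mentions can be sketched as a rule that enriches a journal line before it is stored, so that later reporting needs no re-mapping. The project codes, market units, and rule table below are invented for illustration.

```python
# Sketch of a posting-time derivation: a charge carries a project, and a
# configured rule tags it with a market unit the first time it hits the
# system. The rule table and field names here are invented.
PROJECT_TO_MARKET_UNIT = {"PRJ-001": "EMEA", "PRJ-002": "Americas"}

def derive(line):
    """Enrich a journal line with extra dimensions before it is stored."""
    enriched = dict(line)
    if "project" in line:
        enriched["market_unit"] = PROJECT_TO_MARKET_UNIT.get(line["project"])
    return enriched

line = derive({"account": "650000", "amount": 30.0, "project": "PRJ-001"})
```

Because the extra dimension is written onto the line at posting time, any later breakdown by market unit is just a filter over the journal, not a reconstruction.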

Ken: Carsten thank you for joining us today.

Carsten: You’re very welcome, thanks for having me.
