Optimizing SAP BW on SAP HANA

Live Q&A with HANA 2017 speaker Dirk Morgenroth

Published: 01/May/2017

Reading time: 22 mins

After the technical migration to SAP BW on SAP HANA, the key to a successful optimization road map is leveraging the capabilities of your system that are pushed down to the SAP HANA database rather than executed on the application server. That is a challenging task.

In this live Q&A, HANA 2017 speaker Dirk Morgenroth provided insights on identifying, enabling, and making use of pushed-down capabilities to optimize performance.

Meet the panelist: 

Dirk Morgenroth, Atos
Dirk Morgenroth is a senior consultant for Business Intelligence at Atos, Vienna. As a certified SAP application professional with seven years of hands-on experience in SAP NetWeaver BW, he covers the complete BI lifecycle and supports clients in defining their BI strategy and in designing, implementing, and maintaining their individual data warehouse solutions. In addition to these core consulting activities, Dirk works as a trainer in Germany, Austria, and Switzerland and teaches at the Vienna University of Economics and Business.

If you missed the chat or need a refresher, you may view the chat replay or read the edited transcript below.

Transcript:

Matthew Shea: Welcome to today’s live Q&A on best practices for optimizing SAP BW on SAP HANA. I am very pleased to have HANA 2017 speaker Dirk Morgenroth of Atos joining us! Dirk will be answering your questions on identifying, enabling, and making use of pushed-down capabilities to optimize performance.

Dirk is a certified SAP application professional with seven years of hands-on experience in SAP NetWeaver BW. He covers the complete BI lifecycle, and supports clients in defining their BI strategy and in designing, implementing, and maintaining their individual data warehouse solutions.

Welcome Dirk!

Dirk Morgenroth: Hello! Thanks for being here in the chat with us. As you know, I will be at HANA 2017 in Amsterdam as a speaker in June. Feel free to send questions – I will try hard to answer them. If I cannot answer something on the spot, I will come back to it afterward and add information.

And thank you, Matt, for your preparation of this chat.

Matthew Shea: You’re welcome!

Comment From Alvaro: Hello, to achieve better performance, is it necessary to avoid very extensive ABAP code?

Dirk Morgenroth: Hello, Alvaro. Thanks for your question. If you can, you should try to avoid those kinds of extensive and expensive ABAP statements. They are all executed on the application server, not natively on the HANA database. If you cannot avoid them because “migrating” them, e.g. to SQLScript, is too difficult or not feasible, there are still many options to stay with ABAP and improve those statements. In general, specify an explicit field list in your SELECT statements instead of SELECT *, and take care when working with internal tables: there are more table types than the standard internal table – sorted and hashed tables offer much faster key-based reads. A small sketch of both hints follows below.
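
[Editor’s note] To make these two hints concrete, here is a minimal sketch as it could look in a BW end routine, where RESULT_PACKAGE is provided by the framework. The lookup table /BIC/ARATES2 and all field and structure names are purely illustrative assumptions:

* Minimal sketch (illustrative names): explicit field list plus a
* sorted internal table for fast key-based lookups.
TYPES: BEGIN OF ty_rate,
         currency TYPE c LENGTH 5,
         rate     TYPE p LENGTH 9 DECIMALS 5,
       END OF ty_rate.

* Sorted table with a unique key: READ uses a binary search instead
* of the linear scan you would get on a standard table.
DATA lt_rate TYPE SORTED TABLE OF ty_rate WITH UNIQUE KEY currency.

* Select only the two fields that are actually needed - no SELECT *.
SELECT currency rate
  FROM /bic/arates2            " active table of a lookup ADSO (illustrative)
  INTO TABLE lt_rate.

LOOP AT result_package ASSIGNING FIELD-SYMBOL(<ls_result>).
  READ TABLE lt_rate INTO DATA(ls_rate)
       WITH TABLE KEY currency = <ls_result>-currency.
  IF sy-subrc = 0.
    <ls_result>-amount = <ls_result>-amount * ls_rate-rate.
  ENDIF.
ENDLOOP.

Even without moving the logic to SQLScript, changes like these often reduce routine runtimes considerably.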

Comment From mepunu: HANA is one of the most important emerging technologies in the market today. Where should a BW consultant start to get a basic overview of HANA?

Dirk Morgenroth: Hi Mepunu. Thank you. You’re totally right – it is in the market and still emerging. My suggestion is to take some classes on openHPI (https://open.hpi.de) or openSAP (https://open.sap.com/). The former provides valuable classes on HANA as a technology and its basics. The latter offers many classes dedicated to real SAP environments, e.g. BW-related ones. You may join a class that is currently running, or work through a course that has already finished. Aside from these, there is a lot of information on YouTube (curated by SAP) and in the SAP community.

Comment From Nate: What are some capabilities that are already pushed down in back-end and front-end processing, like in ETL or query processing?

Dirk Morgenroth: Hello Nate – quite a few capabilities of a regular BW system are “pushed down” these days. Some are transparent to you – like DSO activation – and some depend on your customization, that is, on real implementation effort. Within regular BW data flows, a lot of this happens in BW transformations.

Comment From Brandon: Where and how do you create Advanced DSOs in HANA Studio, what are some best practices for doing so (naming conventions, groups/packages, etc.), and what is the easiest path for migration from 3.x modeling objects to these Advanced DSOs?

Dirk Morgenroth: Hello Brandon – thanks for your question. Advanced DSOs (ADSOs) are crucial to state-of-the-art BW landscapes; they are the strategic data targets in the SAP BW world. Naming conventions are typically defined by your company’s IT standards and should align with them. That said, ADSOs can behave like data targets you already know well – e.g. DSOs and InfoCubes. One approach is to keep your current convention, analogous to the old objects. Another is to encode in the name both that an object is an ADSO *and* how it behaves; this helps you distinguish objects and their real nature. Keep in mind that the length of technical names is still limited. As for migration, one approach is to recreate those objects from scratch as ADSOs, e.g. with the help of a template (your initial object). But there are also useful migration paths, especially if you want to go for BW/4HANA. For example, you may then migrate a MultiProvider to a CompositeProvider and InfoCubes to ADSOs.

Comment From 3J: At the back-end level, what is the gain compared to the old BW way of working? I mean, extractors are still the same (except for type ODP instead of API). Could we have a concrete example of a HANA script within a transformation that will replace an ABAP routine?

Dirk Morgenroth: Hello 3J. Thanks for your input! You’re right – on the traditional extractor side of things, performance still depends on your source. However, everything that happens *after* data has been transferred to the inbound layer of your BW system can potentially benefit from HANA-native performance gains. With the latest BW versions (BW 7.50 and above), SQLScript is available in many places – SQLScript-based expert routines and SQLScript-based transformation rules. Join-type database statements run much faster directly on the HANA layer than package-based processing, which requires repeated round trips to the application server. A simplified sketch follows below.
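
[Editor’s note] To give a concrete, heavily simplified illustration: when you choose a SQLScript expert routine, BW generates an AMDP class whose method body you fill with SQLScript. The sketch below only mimics that shape with hand-made, hypothetical types and a made-up rates table; in a real system the class skeleton and the inTab/outTab types are generated by the transformation framework:

CLASS zcl_demo_expert_routine DEFINITION PUBLIC FINAL CREATE PUBLIC.
  PUBLIC SECTION.
    INTERFACES if_amdp_marker_hdb.
    TYPES: BEGIN OF ty_source,
             docno    TYPE c LENGTH 10,
             currency TYPE c LENGTH 5,
             amount   TYPE p LENGTH 9 DECIMALS 2,
           END OF ty_source,
           tt_source TYPE STANDARD TABLE OF ty_source WITH EMPTY KEY,
           BEGIN OF ty_target,
             docno  TYPE c LENGTH 10,
             amount TYPE p LENGTH 9 DECIMALS 2,
           END OF ty_target,
           tt_target TYPE STANDARD TABLE OF ty_target WITH EMPTY KEY.
    METHODS procedure
      IMPORTING VALUE(intab)  TYPE tt_source
      EXPORTING VALUE(outtab) TYPE tt_target.
ENDCLASS.

CLASS zcl_demo_expert_routine IMPLEMENTATION.
  METHOD procedure BY DATABASE PROCEDURE FOR HDB
                   LANGUAGE SQLSCRIPT
                   OPTIONS READ-ONLY
                   USING /bic/arates2.
    -- One set-based join executed entirely inside HANA: no
    -- package-wise round trips to the application server.
    outtab = SELECT i.docno AS docno,
                    i.amount * r.rate AS amount
               FROM :intab AS i
               INNER JOIN "/BIC/ARATES2" AS r  -- illustrative lookup table
                 ON i.currency = r.currency;
  ENDMETHOD.
ENDCLASS.

What would be a package-by-package lookup loop in an ABAP routine becomes a single join that the HANA optimizer can parallelize.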

Comment From Lee: What process do you go through to identify where performance improvements can be made after a customer has migrated to BW on HANA?

Dirk Morgenroth: Hello Lee! When it comes to BW performance, you can distinguish between back-end performance, which on the application side is typically related to transformations between objects, and front-end performance, which directly affects query processing. You may be well aware of the “traditional” kind of identification: for ETL- and ABAP-related processing, that is the ABAP runtime analysis (transaction SAT, formerly SE30). By setting breakpoints you can find long-running parts or statements, e.g. in your end routine. To discover which transformation is too slow, you can still use ST13 and drill down to the particular long-running chain. This approach has helped me many times, and there is no real magic about it – it did not change much when migrating from AnyDB to HANA. You may also use query 0TCT_MC21_Q0101 from the technical content, which provides similar information. On the front end, it is always about query statistics. As on AnyDB, you can use RSRT (or RSRT2) to create statistics and identify the reason for a long-running query.

Comment From Christian: Hello Dirk. We are migrating only the BW database to HANA. Are there any system-defined InfoCubes we need to migrate to be optimized for HANA?

Dirk Morgenroth: Hello Christian – you typically optimize all InfoCubes during technical migration. Even for customers directly serviced by SAP, this is a step done during migration. This typically works pretty well. Some issues may arise with non-cumulative key figures, but typically everything can be managed. Having said that, you may stay with your non-optimized objects for a while, but it is not recommended.

Comment From Shiv: What are the lessons learned while implementing SAP BW on SAP HANA?

Dirk Morgenroth: Shiv, thank you – that’s a huge question, right? One major finding is that doing the technical migration first, followed by an optimization phase once the system has stabilized, is the right way to go. Some customers stick with their old customizations and are still pretty happy with the performance gains that HANA technology alone delivers. Still, I’d prefer to migrate and then add a well-structured optimization phase based on a bottleneck-identification process – in my opinion, it is key to target the “low-hanging fruit” and “quick wins”, i.e. improve the parts of your solution landscape that really need improvement. Those optimization tasks are not free; they come at a cost – the time is provided by you, your colleagues, or an external consultant.

Comment From Josko: What are major optimization tasks you would recommend?

Dirk Morgenroth: Hi Josko, thank you. Please refer to my reply to Shiv. Aside from that, it may be interesting to check your query settings with regard to the query execution modes. On the ETL side of things, pay close attention to the batch manager settings, i.e. the number of parallel processes. You may be astonished how many DTPs have been set to parallel processes = 1, that is, sequential processing, for one reason or another. Check with transaction RSBATCH – this works for AnyDB BW systems, too.

Comment From Is there anything …: Is there anything that can be done that directly improves BEX query runtime?

Dirk Morgenroth: Sure – check the long-running queries and their query/InfoProvider settings, and check for exception aggregations. There is a lot to be found in RSRT’s “Execute and Explain” and “Execute and Debug” modes.

Comment From Christian: Thanks Dirk. So when I migrate only the database, for example from Oracle to HANA, are all dimension tables of an InfoCube deleted at the end? And will the fact table then contain the SIDs?

Dirk Morgenroth: Hello Christian, thanks for your follow-up. When “optimizing” an InfoCube, the underlying tables change. The former fact tables (E/F) and the dimension tables are reduced to a single fact table with four partitions; only the package dimension table (P) remains. And you’re right about the SIDs – since the dimension tables are gone, the fact table references the SIDs directly. Query performance benefits hugely from this, because some expensive database statements (the joins over dimension tables) can be left out.

Comment From Christian: If I want to use the potential of HANA with Web Intelligence, are there some tips & tricks that I have to follow? If I don’t use query stripping, I think that I won’t get the benefits of the HANA database. Is this correct?

Dirk Morgenroth: Hello Christian – thanks again for your input. I am not a specialist in WebI, but you should still pay attention to the microcubes that are managed (and filled) during Web Intelligence runtime after querying. Nothing new here, but make sure you only include as many free characteristics as you really need. In general, of course, WebI querying speed improves simply because the database is queried faster in the first step. I believe that with one of the more recent BO/BW versions there is a way to push microcube operations down to the HANA database; please refer to https://sapbi.blog/2016/04/…. But this applies only to direct access to HANA views. Perhaps that helps you – the occasional critical solution may benefit from exactly this approach as a workaround to BW queries. However, there are some drawbacks in that case (no BW query OLAP features).

Comment From Alvaro: What do you recommend to use, a MultiProvider or a CompositeProvider?

Dirk Morgenroth: Hello Alvaro. If you can, you should always use a CompositeProvider for new solutions. Not only is it the “strategic” BW object, it is also key to future improvements on the SAP side. CompositeProviders include join capabilities, they support nesting, and, last but not least, they are key to mixed scenarios.

Comment From Guest: Hello. What is the element in BW on HANA we should use instead of ‘transformation’? I mean, if we have a complex algorithm to process data, which ‘BW on HANA’ element should be used? Is an ‘enterprise’ license necessary for it?

Dirk Morgenroth: Hi Guest. The transformation is still the key element for advanced data processing in BW data flows – regard it as a vehicle. Within a transformation you can use advanced SQLScript-based functionality, on a per-field basis or in an expert routine. In newer BW versions a mixed stack is also possible and supported: a SQLScript-based transformation followed by an ABAP-based one. Aside from that, you may implement HANA-native database procedures to create datasets, and HANA-based analysis processes are another important option. A rough sketch of such a procedure follows below.
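
[Editor’s note] As a rough sketch of that last point, with all class, table, and field names hypothetical: a standalone AMDP method is one way to implement such a HANA-native procedure that builds a dataset inside the database and hands back only the result:

CLASS zcl_sales_dataset DEFINITION PUBLIC FINAL CREATE PUBLIC.
  PUBLIC SECTION.
    INTERFACES if_amdp_marker_hdb.
    TYPES: BEGIN OF ty_result,
             region TYPE c LENGTH 3,
             amount TYPE p LENGTH 15 DECIMALS 2,
           END OF ty_result,
           tt_result TYPE STANDARD TABLE OF ty_result WITH EMPTY KEY.
    " Builds an aggregated dataset entirely inside HANA.
    METHODS get_sales_by_region
      EXPORTING VALUE(et_result) TYPE tt_result.
ENDCLASS.

CLASS zcl_sales_dataset IMPLEMENTATION.
  METHOD get_sales_by_region BY DATABASE PROCEDURE FOR HDB
                             LANGUAGE SQLSCRIPT
                             OPTIONS READ-ONLY
                             USING /bic/asales2.
    -- Aggregation runs in HANA; only the small result set travels
    -- back to the application server.
    et_result = SELECT region,
                       SUM( amount ) AS amount
                  FROM "/BIC/ASALES2"  -- active table of an ADSO (illustrative)
                 GROUP BY region;
  ENDMETHOD.
ENDCLASS.

Such a method can then be called from ABAP or from other procedures, keeping the heavy lifting in the database.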

Comment From Christian: Hi Dirk, is there a minimum support package level needed in order to use all the functionality of SAP BW on HANA?

Dirk Morgenroth: Hello Christian, you can run BW on HANA as of BW 7.30. Having said that, a lot changed with 7.40 SP5 and later. I’d recommend at least 7.40 SP8, but it would be better to go with BW 7.50 SP4, which brings quite a few interesting features.

Matthew Shea: Thank you everyone for posting today! This concludes today’s chat.

Dirk Morgenroth: Feel free to contact me via email. Of course, I also will be onsite in Amsterdam in a couple of weeks.

Matthew Shea: Thank you Dirk for all your insightful answers!

Dirk Morgenroth: You’re welcome, Matt.

Matthew Shea: Looking forward to your sessions in Amsterdam!

I will send everyone an email when the transcript of the discussion is posted. Have a great day!
