Johns Hopkins University and Health System runs a complex SAP environment that includes financial, supply chain, and HR solutions and more than 10,000 users. Learn how the training demand for those users led Johns Hopkins to move from its original demo-only class structure to build a standalone training client in which students can get hands-on experience in the SAP system.
Johns Hopkins operates and manages hospitals throughout the world and is best known for Johns Hopkins University and Johns Hopkins Health System in Baltimore, MD. In 2007, Johns Hopkins ventured into the world of SAP with a big-bang implementation of SAP R/3 4.7, SAP Supplier Relationship Management (SAP SRM) 4.0, SAP Business Warehouse (SAP BW) 3.5, SAP Enterprise Portal (SAP EP) 6.0, SAP Exchange Infrastructure (SAP XI) 3.0, and SAP Solution Manager 3.0. Its structure is unique: it has merged two separate corporations, Johns Hopkins University and Johns Hopkins Health System, into a single, unified, and stable SAP environment.
Johns Hopkins, ranked as US News and World Report’s top hospital for the past 20 years and world-renowned for patient care, research, and teaching, has a predictably strong focus on its own internal training. When its initial SAP implementation went live, it was quick to create a training framework for its SAP users, and, after real-world experience with the system, quick to realize that the framework needed to be revamped. It moved from teaching through lectures to hands-on training, thanks to a home-grown training client built by Johns Hopkins’ IT organization with internal resources.
Note
In 2010, Johns Hopkins moved to the environment it runs today by upgrading to SAP ERP Central Component (SAP ECC) 6.0 EHP 4, SAP SRM 7.0 EHP 1, SAP NetWeaver BW 7.0 EHP 1, SAP NetWeaver Portal 7.0 EHP 1, SAP NetWeaver Process Integration (SAP NetWeaver PI) 7.1, and SAP Solution Manager 7.0 EHP 1, and implementing SAP NetWeaver Master Data Management (SAP MDM).
After its initial SAP implementation, Johns Hopkins created a training framework with lecture-style classes that took place once a week, in which instructors presented demos using the QA and Production environments. Classes focused on SAP SRM, travel, and online check requests — the areas in which many of Johns Hopkins’ transactions occurred. Occasional classes and one-on-one training were provided for some of the less widely used transaction areas, such as HR or petty cash.
This training framework served Johns Hopkins users for two years, but several aspects of it were not ideal. First, students were not able to get widespread hands-on training. If students needed hands-on instruction, they had two options, both of which had to take place outside of class: they could request that an instructor walk them through a transaction in the Production environment, or they could sit with an instructor to test out transactions in the QA environment. The former depended on a real transaction needing to be run, which limited students who needed to learn a specific task. The latter took place in the same environment IT tested in, so the data students posted could skew the results of tests IT was executing at the same time.
Security was also a concern when allowing students to work in QA. Only IT and Training users had access to QA, so for students to work there, they had to sit directly with a trainer, one-on-one. Since the data in QA was real-world data copied from Production, IT had to pay more attention to security in QA than expected. Ultimately, IT had to spend a significant amount of time maintaining the training team’s IDs.
“We were colliding with each other, if you will,” says Rose. “We had to spend a fair bit of extra time monitoring security in an environment that we didn’t expect to have to do that in, and the training group was finding it unreliable to be working in our development systems, which at our discretion would be unavailable to them.”
Communication between the two organizations became difficult. Though classes were scheduled months in advance, system downtime wasn’t. If planned maintenance wasn’t clearly communicated by IT to Training, classes that took place in the affected environments risked being cancelled. Additionally, if the system looked different from one day to the next, instructors would have to contact IT to determine why, and IT would have to look into the issue to determine the cause — for instance, enhancements to the system had changed the way a screen looked. The constant communication the two organizations had to maintain to make the training framework work was proving too great a burden.
“In post-implementation surveys and based on our real-world, hands-on observations, we saw that training was not excellent and that we needed to enhance our training opportunity for those coming in for the first time,” says Rose. “We wanted to give Training an area where they controlled the schedule, the general concept of what that system looked like, and what security was there, as opposed to being at our discretion for all of that.”
The obvious solution was to create a standalone client for training classes. The client would be available to the training organization on a defined schedule, approved by the training teams, and would be separate from a live system, allowing students to have unlimited hands-on experience.
Though the need for such a training client was obvious, convincing management to dedicate the hardware investment and IT resources for a project with such an intangible benefit was a difficult task. “It’s like six degrees of separation,” says Rose. “It was hard to determine the ROI based on the fact that this week we trained 10 more people who are now able to go do their job more efficiently.”
The training organization independently contracted a consultant to get an idea of how much of the training budget would be required to have outside resources build a client. “That number came back very high, of course, given that it was paying consultants versus leveraging our internal resources,” Rose says. “That was basically our motivation to do this in-house. Just like a home improvement, if you’re handy enough you can do things on your own.”
Although it would take more time to build this client in-house than it would to contract the job out, the cost savings, paired with the fact that Johns Hopkins already had knowledgeable resources that could take on the project, were the final factors in deciding to build the client internally. “We, in IT, are a service provider, and Training is a customer of ours, so we basically agreed to fulfill that customer request. We allocated the necessary resources internally to manage the implementation and collaborate with our users.”
Examining Their Options
The project team narrowed down its options for building the standalone client to two choices: use a full copy of Production or time slice SAP R/3. The simplest solution would be the former: standing up a full copy of Johns Hopkins’ Production environment, including a full copy of SAP R/3, SAP SRM, SAP BW, and SAP NetWeaver EP, would best accomplish the goal. Unfortunately, that solution was problematic for two reasons. First, the environment was simply too large; Johns Hopkins wanted to keep down its hardware costs, and supporting an environment of that size would require significant additional resources. Second, it would not solve the security risk posed by students working directly in a live system, and supporting the user IDs of every student that would need access to the training system would be a large drain on IT’s time.
Taking just a time slice of the SAP R/3 system, however, would allow the project team to copy only a certain period of data, including all transactions from that time period, instead of supporting a full copy of Production. Moreover, the team wouldn’t have to bring in new tools: a tool it was already using to scramble HR data, Data Sync Manager by EPI-USE, also had a component called Client Sync that could time slice data, though the team had never used that functionality before.
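Conceptually, a time slice copies master data and configuration in full but keeps only the transactional documents posted within a chosen window. The sketch below illustrates that selection logic only; the field names and the January-to-April 2008 window are assumptions for the example, not Client Sync internals.

```python
from datetime import date

# Illustrative only: the selection logic a time slice implies. Field names
# and the slice window are hypothetical, not Client Sync internals.
SLICE_START = date(2008, 1, 1)
SLICE_END = date(2008, 4, 30)

def in_slice(document: dict) -> bool:
    """Keep a transactional document only if it was posted within the window."""
    return SLICE_START <= document["posting_date"] <= SLICE_END

def build_slice(master_data: list, documents: list) -> dict:
    """Master data and configuration are copied in full; transactions are filtered."""
    return {
        "master_data": list(master_data),
        "documents": [d for d in documents if in_slice(d)],
    }
```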
The data scrambling component of Data Sync Manager, called Object Sync, gave IT the peace of mind of knowing that students would not be able to see any real HR information in the transactions copied to the training client. Personal information, including salary, social security numbers, positions, addresses, and phone numbers, was replaced with fake information so that a student who saw a record wouldn’t be able to identify the person.
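As a rough illustration of what a scrambling pass like this does (not the Object Sync API, whose behavior and field names belong to the vendor), a minimal sketch might look like the following. Seeding the fake values from the personnel number is one way to keep every document pointing at the same fake identity; that design choice is an assumption made here for the example, not something the article describes.

```python
import hashlib
import random

# Hypothetical illustration of HR data scrambling for a training client.
# Field names and replacement rules are assumptions, not the Object Sync API.
FAKE_FIRST_NAMES = ["Alex", "Jordan", "Taylor", "Morgan", "Casey"]
FAKE_LAST_NAMES = ["Smith", "Jones", "Lee", "Brown", "Davis"]

def scramble_employee(record: dict) -> dict:
    """Return a copy of an HR record with identifying fields replaced."""
    # Seed the generator from the personnel number so the same person always
    # gets the same fake identity, keeping related transactions consistent.
    seed = int(hashlib.sha256(record["pernr"].encode()).hexdigest(), 16)
    rng = random.Random(seed)

    scrambled = dict(record)
    scrambled["first_name"] = rng.choice(FAKE_FIRST_NAMES)
    scrambled["last_name"] = rng.choice(FAKE_LAST_NAMES)
    scrambled["ssn"] = f"{rng.randint(900, 999)}-{rng.randint(10, 99)}-{rng.randint(1000, 9999)}"
    scrambled["salary"] = round(rng.uniform(40_000, 120_000), 2)
    scrambled["address"] = f"{rng.randint(100, 999)} Training Way"
    scrambled["phone"] = f"555-{rng.randint(1000, 9999)}"
    return scrambled
```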
Therefore, the project team, which included about 10 functional members and six technical members, chose to time slice four months of SAP R/3, take a one-time extraction of SAP BW and SAP NetWeaver EP, and use a full copy of SAP SRM to build the training client, named TRN800 (Figure 1). Third-party consultants were brought in to teach the team the time slice functionality of Data Sync Manager. At the time, the tool didn’t have the functionality to time slice SAP SRM, which is why the team had to stand up a full copy of SAP SRM next to the time slice of SAP R/3. Today the tool has that functionality.

Figure 1
The project’s functional design
The choice to time slice four months of data resulted in a much smaller database size than a full copy of Production would have required (Figure 2). Additionally, this option relieved security concerns, as the training client would be separate from live environments, freeing the students to learn through trial and error without consequence.

Figure 2
Johns Hopkins’ Production client size compared to TRN800 (EB = Enterprise Buyer; BW = SAP Business Warehouse)
Hardware was “repurposed,” as Rose put it, for the new client. Older pieces were selected from Johns Hopkins’ collection of hardware that had been used to support the entity over the years. “We didn’t try to buy the best and fastest equipment because a big point of this was that we could repurpose something that was a little bit older and not the fastest around. That’s all we needed because we were putting a small SAP system on there, not this huge Production-sized system,” he says.
To transact in the system, generic user IDs were created for students to use during class. Each of these generic IDs was assigned a few business roles from the areas of finance and supply chain. In all, 30 copies of the same ID were created to ensure enough data was available to teach the same classes, which averaged 10 to 12 students each, several times. For Health System classes, 80 generic user IDs were created to allow for higher user counts. With these IDs, students in a class could sit at their own stations, sign in to the system, and walk through transactions together, each working independently.
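A minimal sketch of provisioning such generic IDs is shown below; the naming convention, role names, and the idea of scripting the step at all are assumptions for illustration, not details from the article.

```python
# Hypothetical sketch of provisioning generic training IDs; the naming
# convention and role names are assumptions, not Johns Hopkins' actual setup.
FINANCE_AND_SUPPLY_CHAIN_ROLES = ["Z_TRAIN_FINANCE", "Z_TRAIN_SUPPLY_CHAIN"]

def build_training_users(prefix: str, count: int, roles: list) -> list:
    """Create `count` generic IDs, each assigned the same few business roles."""
    return [
        {"user_id": f"{prefix}{n:03d}", "roles": list(roles), "client": "TRN800"}
        for n in range(1, count + 1)
    ]

# 30 IDs for the standard classes, 80 for the larger Health System classes.
university_users = build_training_users("TRNUSER", 30, FINANCE_AND_SUPPLY_CHAIN_ROLES)
health_system_users = build_training_users("TRNHS", 80, FINANCE_AND_SUPPLY_CHAIN_ROLES)
```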
Client refreshes were scheduled for every two weeks, though that was later changed to monthly. Refreshes run on Fridays, the one day of the week on which training generally isn’t scheduled. These refreshes are important because they reset the data that has been consumed by prior classes.
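For illustration, a small scheduling helper along these lines could generate the refresh calendar; the article only says refreshes run monthly on a Friday, so picking the first Friday of each month is an assumption made for the example.

```python
from datetime import date, timedelta

# Illustrative scheduling helper; which Friday is chosen each month is an
# assumption, since the article only states refreshes run monthly on Fridays.
def first_friday(year: int, month: int) -> date:
    """Return the first Friday of the given month (Monday=0 ... Friday=4)."""
    first = date(year, month, 1)
    return first + timedelta(days=(4 - first.weekday()) % 7)

# A year's worth of hypothetical refresh dates.
refresh_calendar = [first_friday(2011, m) for m in range(1, 13)]
```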
The project team set up a transport schedule in which break fixes and enhancements are moved to TRN800 after they are moved to Production. Every two weeks, transports are applied to TRN800 in bulk, which means the training client lags Production by anywhere from two to four weeks. The alternative that some pushed for was moving changes to TRN800 as they are developed in QA, so instructors could train users on enhancements before they are live in Production. That would add unnecessary instability to TRN800, though, and besides, Rose says, most changes don’t affect the courseware, so the updates aren’t noticeable to Training.
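The sketch below shows how that biweekly batching produces the lag: a change that reaches Production simply waits for the next scheduled bulk import into TRN800. The anchor date and cadence details here are hypothetical.

```python
from datetime import date, timedelta

# Rough illustration of a biweekly bulk transport schedule; the anchor date
# is hypothetical. A Production change waits for the next scheduled import,
# which is part of why TRN800 trails Production by a few weeks.
FIRST_IMPORT = date(2011, 1, 14)
IMPORT_DATES = [FIRST_IMPORT + timedelta(weeks=2 * i) for i in range(26)]

def next_bulk_import(prod_import_date: date) -> date:
    """Return the first scheduled TRN800 import on or after a Production import."""
    return next(d for d in IMPORT_DATES if d >= prod_import_date)

# Example: a transport imported to Production on March 1 waits until March 11.
lag = next_bulk_import(date(2011, 3, 1)) - date(2011, 3, 1)
```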
“When we’re seven, eight, nine years down the road it might make sense that we change this approach and start building enhancements into the training client before they’re in Production,” Rose says. “But in our situation right now, adding them after the fact is best for us.”
Instructors therefore have a choice: they can incorporate changes from enhancements and fixes into training once the changes are moved to TRN800, or they can demo the changes as they’re made in QA or Production, in which case students can’t get hands-on experience with them. This arrangement has been a success thus far, according to Rose.
Demoing Johns Hopkins’ live QA system, as instructors did in the original training framework, is still an option during training classes. In fact, it’s the contingency plan for TRN800. The project team created 10 personal user IDs for the QA environment for use by instructors, so if the training client goes down, classes can revert to demo mode instead of being cancelled.
Post-Op Analysis: Success
TRN800 has been a success for Johns Hopkins. The client has advanced training from lecturing students on how transactions are run to providing them with the means to log in to the system and personally execute the lessons they learn in class. Students can now stay after class to try out the system, a big improvement over the original framework, in which doing so was a concern because students would have been working in the QA environment.
“Now when, say, a user needs to create a check request that they don’t know how to process, they can come and try to do it in the training client and make a mistake without any repercussions,” Rose says.
The Training and IT organizations can now work autonomously from each other, making life easier for both. IT doesn’t have to warn Training every time a system is going down, and Training doesn’t have to worry about scheduling classes around system maintenance. While IT still owns and maintains the client, Training operates in it freely, unaffected by IT’s day-to-day activities.
Reaction to the new training system has been positive, and instructors are pleased with the dedicated client. “Copying the master data and configuration from Production, using a time slice only, has proven to be the best possible solution for the education of our users,” says Bob Sicoli, an instructor at Johns Hopkins. “Our training group was consulted throughout the process and entered and tested the scenarios for our classes. The training client enables our learners to practice with controlled scenarios that model the exact business process.”
Like most creators of an inaugural project, however, Rose and his team are already looking at how they can create a new, bigger and better version of the training client. For starters, the data is now years old. Training is using data from 2008, which some view as too dated. “That’s sort of an arguable discussion,” Rose says. “Does it matter that it says 2011? Some say it does, some say it doesn’t.”
The biggest motivation for building a new client, however, is having a larger slice of data. As the client stands now, the four months of data it includes isn’t enough to teach users year-end functionality, such as closing the books. The training team would like a new client built from an 18-month time slice, an amount of data the young SAP system didn’t even have when TRN800 was created, so that it includes a full calendar year and a full fiscal year of data.
This new dream client comes with its share of obstacles, though. Hardware requirements and costs to support a client with 18 months of data would, of course, be much higher than what TRN800 required. The time it would take to build such a client is another consideration. The initial project plan was four months to build a client that included four months of data; building one that includes 18 months of data would be a serious time commitment. Since building TRN800, however, Rose and his team have used the same process to build other clients, so they are more familiar with the use of time-sliced data today.
If and when they do build a new training client, the project team will do some things differently based on the lessons learned from this project. They were held up a few times over the course of the project, mostly due to insufficient preparation. The project team initially built a four-month project plan, following the ASAP methodology, which kicked off at the end of April 2008. However, the project didn’t wrap up until January 2009, taking almost twice as long as projected. In the end, four complications each added a month to the project:
- Client Sync learning curve. Because IT was already familiar with Object Sync’s scrambling functionality, the team underestimated the amount of time it would take to get up to speed on Client Sync’s time slice functionality. While the data scrambling portion of the project went according to plan, learning to time slice on the job took more time than expected and required additional support.
- Additional scrambled data and testing. Initially, the project plan called for scrambling only internal HR data. During project testing, however, a discussion about the importance of hiding personal vendor data led to a decision to scramble that information as well. As a result, IT had to manually scramble data such as freelancers’ home addresses and vendors’ bank account information, which Object Sync didn’t have the functionality to scramble at the time. This added time to the project, as did the increased testing that had to be done as a result.
- SRM and HR data alignment. Prior to the project, the team didn’t consider the alignment of scrambled HR data with non-scrambled SRM data. Consequently, the SRM org structure, filled with actual Johns Hopkins personnel, didn’t match the HR org structure, which was filled with fake names. This would cause issues if students tried to run any tasks that tied the two data sets together, such as creating workflows or goods receipts. To remedy this, the project team had to spend extra time replacing the org structure in SRM with the data from HR in the training client (see the sketch after this list).
- Holiday avoidance. While the work was finished in November, the project team thought it wise to wait until January 1 to go live with the client, instead of trying to roll it out around chaotic holiday schedules.
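The third item above, re-aligning the SRM org structure with the scrambled HR data, can be pictured with a short sketch. The field names and the dictionary keyed by personnel number are hypothetical; the article says only that the SRM org structure was replaced with the HR data in the training client.

```python
# Hypothetical sketch of the SRM/HR re-alignment step: every real employee
# referenced in the SRM org structure is swapped for the fake identity that
# the scrambling step assigned to the same personnel number.
def align_srm_with_scrambled_hr(srm_org_nodes: list, scrambled_hr: dict) -> list:
    """Replace real names on SRM org nodes with the scrambled HR identities.

    `scrambled_hr` maps personnel number -> scrambled HR record, as a
    scrambling pass like the earlier sketch might produce.
    """
    aligned = []
    for node in srm_org_nodes:
        fake = scrambled_hr.get(node.get("pernr"))
        if fake is not None:
            node = {**node,
                    "first_name": fake["first_name"],
                    "last_name": fake["last_name"]}
        aligned.append(node)
    return aligned
```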
“Plan for the future,” Rose advises others undertaking such a project. “We sort of did the minimum, and if we had envisioned the future we might not be considering rebuilding it as quickly as we are. Try to define what may be on a two- to five-year horizon and you might find that you don’t have to rebuild it so quickly.”
“There’s a lot of reasons why we were a little short-sighted — we wanted to do it fast and we wanted to do it as cheap as possible. It’s debatable whether even if we had done more, we wouldn’t still be looking at rebuilding it now, but there are certain things that we can’t do that would have been nice to have been able to do without relying on whether we rebuild the client or not,” he says.
Laura Casasanto
Laura Casasanto is a technical editor who served as the managing editor of SCM Expert and Project Expert.
You may contact the author at lauracasasanto@gmail.com.