How GSA revolutionized asset management and reduced annual IT O&M costs
After partnering with ICF to build and deploy a unified, scalable solution to automate logistical functions, GSA has reduced annual IT O&M costs by $2M. Hear from the project stakeholders as they describe the revolutionary cloud-based system that changed the game for how the Office of Facilities Management handles its workload:
Presenters
- David Marcus - Program Management Director, ICF
- Fernando Castelli - IT Program Manager, GSA
- Michael Hoffman - Program Manager, GSA
Transcript
David Marcus: Good afternoon. With everyone in listen-only mode, I think-- I hope everyone can hear me. My name is David Marcus, and I will be joined here today by my GSA counterparts Fernando and Mike. We'll be going over our story of how we revolutionized GSA's asset management system and reduced annual costs for the program.
So I am the program manager for ICF. I've been on this program since the start, and I lead the team that architected GSA's asset management system. I will hand it off to Mike and Fernando to introduce themselves. Mike?
Michael Hoffman: Thank you, David. So this is Michael Hoffman, and as David mentioned, I'm a program manager with the US General Services Administration. I work for the Office of Facilities Management, and I am the business line program manager for the National CMMS. Fernando, can you introduce yourself?
Fernando Castelli: Yes. Thank you, Mike. This is Fernando Castelli. I am the IT project manager for the National CMMS. I work with the GSA Office of the Chief Information Officer as a liaison on any IT-related issues that are required for NCMMS.
David Marcus: All right, thank you, Mike and Fernando. So today we're going to be going over what the NCMMS is and how we've standardized, optimized, and built up the architecture for the program. We'll be going through some of the solutions and impacts that we've had on the program. And as this is a webinar, we request that you hold off on questions for a Q&A session at the end, but feel free to type your questions into the chat box on your screen as they arise, and we will get to them at the end of the session.
OK. As we stated, lines will be muted for the time being. This is being recorded, so feel free to share it with your peers or reference it again later; we'll be sending the information out after the session. And if there's anything else that piques your interest and you want to hear more from our team on, feel free to reach out, and we will see if we can set that up in a future session or answer any questions that you might have.
So what is the NCMMS? I am going to hand it over to Mike and Fernando, whose time here predates mine, and they'll tell you how GSA was doing facilities management before this program got underway and before we went live with NCMMS.
Michael Hoffman: Thank you, David. This is Mike, again. And so before we came into the National CMMS, what we had was sort of a patchwork of different systems. On the diagram, a number of GSA's 11 regions are colored in blue. At the top, you'll see Region 6, and you'll see Regions 10 and 9 on the West Coast. All of the regions that you see with the blue boxes had their own region-provided CMMS. So in one case we had a system called PMA, and in a couple of cases we had Maximo.
In another case we even had a custom in-house solution. So we had a patchwork on the left half, the West side, of the map. And on the East side, with the exception of Region 11, which is the Washington, DC area, we had a number of contract-provided systems, which meant that the mechanical teams who were contracted to maintain our buildings came in with their own system, which was anything from a CMMS to a spreadsheet or something else.
And so in that legacy situation, if we wanted data about how things were going in a given building, we would put out a big data call, get a bunch of spreadsheets back, and try to merge them. So in that old system, we didn't have consistency in, for example, how you name things. So it was hard to look across the portfolio and see what was going on.
And one of the really large downsides, especially in those regions where the contractors provided the systems, is that those contracts typically last five years. So the contractor would come in and load data either from scratch or perhaps from a previous contractor. And then when they left, you'd lose a lot of that richness of the data and the equipment histories. That was a problem for us.
Again, we're trying to run a unified portfolio across 50 states and 11 regions. And that was really the motivation to get all of this data in one place, get it consistent, get the processes consistent, and move forward with a national system. Next slide, please.
Thank you. So we're now on slide 6. This is just a little bit. I'm sure we have a wide range of folks listening today. So this is just a real quick look at what a CMMS or a computerized maintenance management system does for you and what ours does for us specifically. Our system is built on Maximo. That's an IBM product.
And the core of this system is asset management. In this case, asset refers to major maintainable equipment in your facilities; it could be chillers, could be boilers, things like that. And we manage something north of a million assets across our roughly 1,500 buildings.
And we do day-to-day work order management, with tens of thousands to hundreds of thousands of work orders flowing through every month. So it's a pretty large amount of data we're tracking. We're also integrated, looking at the upper left of the diagram, with something called GSAlink, which is a building analytics system. It talks to our building automation systems.
Think of it as like a superpowered thermostat. It's running everything in that building, keeping track of humidity and air quality and controlling your fans and your set points and all that sort of stuff. Well, we talk to that system. And if that system finds something that's out of range, or perhaps you're letting in too much fresh air for some reason, maybe you have a stuck damper or something, GSAlink will analyze that sort of thing and send us something called a spark, a special work order, saying: hey, I'm analyzing this, I see a problem in the building, here's a work order requesting you check it out and close the loop.
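The spark mechanism Mike describes amounts to a rule check: when a pushed sensor value falls outside its acceptable range, a work-order request is generated. Here is a minimal sketch of that idea; the metric names, ranges, and work-order fields are hypothetical, not GSAlink's actual rules or Maximo's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SensorReading:
    asset_id: str
    metric: str
    value: float

# Hypothetical acceptable ranges per metric; real GSAlink analytics are far richer.
RANGES = {"supply_air_temp_f": (50.0, 65.0), "humidity_pct": (30.0, 60.0)}

def make_spark(reading: SensorReading) -> Optional[dict]:
    """Return a 'spark' work-order request if the reading is out of range, else None."""
    lo, hi = RANGES[reading.metric]
    if lo <= reading.value <= hi:
        return None  # in range: no work order needed
    return {
        "type": "SPARK",
        "asset_id": reading.asset_id,
        "description": f"{reading.metric}={reading.value} outside [{lo}, {hi}]; "
                       "please investigate and close the loop.",
    }
```

The key design point is that the loop is closed through the CMMS itself: the analytics system doesn't page a human directly, it files a trackable work order.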
Looking at the left of the diagram, we use the system to plan and track preventive maintenance. Again, across these, like I said, over a million assets. The subset of assets that require preventive maintenance is somewhat lower than that, but it's still hundreds of thousands of assets that we actively maintain and track. Of course, we also track breakdown maintenance and customer requests, or tenant requests. And we have a rich reporting capability as well as our Mobile Maintenance capability.
So again, you can think of a CMMS, generally speaking, as the logistics heart of what you're doing to maintain your facilities. And in GSA, the National CMMS is the system of record for tracking all the work done under the service contracts in the buildings: again, the mechanics, the custodial staff, the landscapers. All those folks are on service contracts, and while we don't necessarily track when they trim something in the garden, whether it's elevators or major equipment or spills, all that stuff flows through the NCMMS for tracking and SLA reporting, which we also share with our tenants.
Next slide, please. Slide 7. OK. So I have alluded to this already, foreshadowed it, but our mission, again, is about a consistent nationwide CMMS and the supporting policies, procedures, and programs around that. And that includes a lot of work on contracts, hundreds of contracts, again, to make them line up with our policies and use the NCMMS the way we need it used.
And we provide complete, consistent, accurate equipment inventories, and that's been quite a journey in itself, and of course the maintenance histories around that equipment, so you can look for trends and problems. Again, consistent, easy-to-use, efficient work order processing and equipment inventory tracking. We don't do those data calls like we used to; we can pull the data up in a matter of minutes, data that used to take weeks to get.
Going to the next point, that really supports our data-driven decisions at all levels. And we support rolling up to the whole portfolio and drilling down to a region, to a service center within a region, to a given building, and on down as deep as you need to go. And again, with the capabilities of this system and our reporting tools, we can slice the data as we need to answer questions, whether it's from our own executives, from our building operators, even sometimes from auditors or congressional staff needing information.
Again, we provide that consistent maintenance across PBS. Integrated with our system we have job plans and guides that we've loaded into the system, and we work with engineers to update those periodically. Again, that's to get consistency across our system, both for quality assurance and quality control. And getting that consistent across all these contracts, something north of $2 billion a year in contracts that we're using to maintain and keep these buildings running, and again, that's all flowing through the CMMS for tracking. And as I mentioned, this is a single solution for PBS, so it's unified.
Next slide, please. Go to slide 8. OK. Thank you. So briefly on our timeline, just the big boxes here. We really kicked it off in 2012, but we didn't just jump in and implement a system; we spent a lot of time working with our regional teams and external subject matter experts. And we developed a number of SOPs, or standard operating procedures, for our 1,500 buildings and got that data in. Again, it came in as everything from spreadsheets to sometimes PDFs, which, if you work with data, is a tough way to load thousands of records.
But we got it loaded, and we continue to refine it; with a portfolio this size that's a never-ending process. And I worked early on with a number of our contracting officers across the regions to get our contracts updated, again, to line up processes and requirements around this. So that's 2012 to 2014. Our rollout phase really began in the 2015-2016 period, when in short order we got authority to operate.
David and his predecessor's team were very strategic about that, and we managed, and I think Fernando will speak to it later, to get at least a preliminary authority to operate in under six months. We were up and running, which was quite fast for our organization. We did multiple releases even in the early days; it was a waterfall approach, but it was broken into multiple releases because there was so much to do to organize that work.
We got our first buildings flowing work through the system in March of 2015, and then our PBS commissioner, that's the top person in the organization, put out an order basically saying: I want work orders flowing in all of our owned buildings by the end, which is September for us, of 2016. So those were our big days; the big effort was major features being added to releases and configuration. And in 2017 to 2018, we really got after enhancing the system. We added tenant surveys.
In 2017, we integrated GSAlink, which, again, talks to our building automation systems. We upgraded in 2018 to a more recent version of Maximo; every couple or few years I expect we're doing that, and it's a big effort for David and his team. We got our first mobile app users in that time period.
We updated our preventive maintenance guide, which is, again, the overall guidance for how to maintain our equipment, and it's tightly coupled with how we go into the NCMMS. Actually, we keep that guide in the NCMMS and we publish from there now; it's not a separate Word document, it's all in our database. And if somebody needs a Word copy, we actually pull it out of the database and transform it.
And we continue to improve that. There was a really good effort done by a team in 2018 to improve it; they tightened it up and actually raised the bar for the use of NCMMS as one of our core systems. So in 2019 to 2021, and at this point we're almost to the end of our fiscal 2020, because of the way our fiscal year works, we've been increasing our value. We are tracking national key performance indicators, and we're working hard with David and others to improve our usability with some of those work centers, which is just a nicer, more modern user interface.
We're using something called COBie, which I won't go into, but it's a way to use a standard to get the data, and we're talking tens of thousands of lines of data, again, from each of our big construction projects, and flow it efficiently into the National CMMS. Again, that's a huge, huge improvement that we've worked out with our partners in design and construction. We used to crawl through the buildings and try to get this information off the equipment boiler plates, and now it's an automated flow. We have customer dashboards, and we'll be doing tenant tickets in 2021, where they can put in service requests.
Next slide, please. Slide 9, OK.
David Marcus: Yeah. Thank you, Mike. So hopefully, that brings you up to speed on the before picture and some of our current-day operations. I wanted to give the big picture here looking at some data points, and you can see how robust the size of this implementation actually is. The ICF team managed to build a system that can both evolve and grow with the user base while implementing GSA's mission, working with key stakeholders to customize a commercial off-the-shelf product, which Mike just took us through as IBM's Maximo, for all the facilities management needs.
With over 20 million square feet of building data, GSA needed a solution that could accommodate an application of this magnitude. We house over 1,500 buildings and to date store more than 500 million records in our system. We have custom-tailored screens for a user base which consists of over 2,000 employees and contractors. This program has been live in production, I think, for a little over five years now, and we continuously work with Mike and Fernando and the GSA team to advance both the IBM product side from a technology standpoint and meet all the customer and user needs.
Looking at ICF GSA operations, I'm extremely proud of the work that GSA and my ICF team have done. Mike took us through the history, but we were able to deliver over 800 initial requirements within a span of 14 months to stand up GSA in production. At first, we operated under a waterfall project management methodology, and over the years we've progressed to a very agile team. We work with GSA to prioritize their enhancement backlog and roll out iterative sprints at their discretion through our Kanban board.
Our team also manages five environments, including one upper environment, and we make sure to follow ICF best delivery practices for software development and also for data migration. As the product owner, our team operates a tier 3 helpdesk as well and provides subject matter experts to troubleshoot complex issues. We have customized different workflows and software programs to track change requests and Maximo support. Our team also personalized the ServiceNow platform to feed into our JIRA ticketing system. This gives us requirements traceability with GSA and the ability to search and report against the issues that come in. This holistic approach enables us to build a knowledge base of reporting structures as the program continues to grow.
I'd like to also point out that the ICF team has deployed the largest federal Maximo cloud project to date. And I want to draw attention to how much data we actually have in our system. As you can see under our delivery best practices on the diagram, we realized the need to migrate our data, and for that we use our Optim database. We've since moved over half a billion records in the last year to the archive, consisting of data that is over 13 months old. And GSA isn't just throwing it in the archive and never looking at it again; they have the ability to plug into our database and report on that archived data.
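The archiving rule David describes, move records older than roughly 13 months out of the live database while keeping them queryable, reduces to partitioning records by age. A minimal sketch, with a hypothetical record shape and an approximated cutoff (the actual Maximo archiving pipeline is far more involved):

```python
from datetime import datetime, timedelta

# ~13 months, approximated in days, purely for illustration
ARCHIVE_CUTOFF = timedelta(days=396)

def partition_for_archive(records, now):
    """Split records into (live, archive) lists by the age of their closed date."""
    live, archive = [], []
    for rec in records:
        target = archive if now - rec["closed"] > ARCHIVE_CUTOFF else live
        target.append(rec)
    return live, archive
```

The point of the split is operational: the live database stays fast for day-to-day work orders, while older history remains available for trend reporting.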
Moving on to the next slide. Here, we're going to go over the NCMMS architecture, and I'm going to hand it off to Fernando to walk us through what it took to set this up and some key facts about the architecture.
Fernando Castelli: Great. Thank you, David. This is Fernando, again. And these diagrams are really not necessarily that pretty to look at, so I'll just give a brief overview of what we accomplished, at least from my perspective. This program was the first at PBS to be a solution centralized on AWS, in the cloud. That presented some challenges that we were able to successfully work through, and in actually a much shorter time frame than we expected.
We did start with our limited authority to operate, which took about six months; it's usually about a year-long process. Accomplishing that within six months gave us the capability to start installing and configuring NCMMS as an application, and gave us an extension of basically one year to achieve a full authority to operate by addressing all of the adequate security controls through our system security plan, which was created in accordance with NIST and FIPS. With NIST's guidelines on FIPS categorization, we were able to make NCMMS an application that can be accessed anywhere, from any device. We were able to use our internal SecureAuth for two-factor authentication into our system, and we use Active Directory; we currently use GSA's Active Directory for user management. So overall, from that perspective, we were able to basically get going in less than 18 months, which is an incredible accomplishment considering this was the first time for all of us.
David Marcus: Thank you, Fernando. That was a great overview. I'd also like to point out, for those of you who have AWS experience or are partners on the line here, that we not only provide hosting solutions for NCMMS, but we provide AWS architects who continuously ensure the solution is engineered to maximize security, uptime, and availability. We leverage multiple AWS availability zones and regions to deliver automated failover. We also provide backup and restoration capabilities through a number of software programs.
Oops. I went a little too fast. OK. Next we're going to be looking at some of the innovations we've rolled out over the years and are currently deploying here at GSA. This next screen takes you to what we call the Maximo start center. It's the first screen users see when they log in from the desktop application. Our team, the ICF team, worked with GSA to customize over 20 different start centers to meet the needs of each type of user in the system.
Mike took you through a couple of types of end users, but working with the business side, we really ironed out what each user needs to see to maintain and complete their work in the most efficient manner. The distinct security groups are important for differentiating end users and the first screen that they're looking at. In the above image specifically, we're displaying our result sets from our SkySpark integration work. I don't know if the pointer is working over here, but you can see.
This user has a number of roles that they have access to from a security perspective. So this O&M contractor is seeing the work list for, as Mike alluded to at the top of the call, the integration point to SkySpark. SkySpark pushes sensor data points to Maximo when their values go outside a certain range for us to track. And having the ability to see the most important work for each user when they log in is a key feature that we provide to GSA.
OK. Moving on from the start center. Next, we're going to talk a bit about reports and how data drives our decisions here at GSA. The GSA team worked with each of our 11 regions to track, monitor, and optimize key performance indicators, and Maximo has a number of reporting capabilities, which I'm not going to cover in their entirety in this format. But the inherent reporting application is BIRT.
We created over 40 custom reports when we first stood up the program to meet some of the regional and business line needs. After rolling those out, we also have the ability for end users to create their own, more basic reports using what we call query-based reports from any specific application. They can log in and create reports based on the fields and the tables for the application that they're looking at, whether it's work orders, assets, or their PM schedule.
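Conceptually, a query-based report of the kind described is just a filter over one application's records plus a projection of the chosen columns. The sketch below illustrates that idea with invented field names; Maximo's actual query-based reporting works against its own tables and UI:

```python
def query_based_report(rows, filters, fields):
    """Keep rows matching every exact-match filter; return only the chosen fields."""
    report = []
    for row in rows:
        if all(row.get(key) == value for key, value in filters.items()):
            report.append({f: row.get(f) for f in fields})
    return report

# Example: hypothetical work orders filtered to one region, projected to two columns.
work_orders = [
    {"wonum": "1001", "region": "R6", "status": "COMP"},
    {"wonum": "1002", "region": "R9", "status": "INPRG"},
]
```

The value for end users is that this filter-and-project pattern needs no report developer: they pick the application, the fields, and the criteria themselves.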
We've also installed the latest version of Maximo in our lower environments right now, and in the beta stages is Cognos, which is an even stronger reporting engine that GSA will have the capability to report against. Looking at the screen, here's a couple of examples of some of the data that Mike and his team look at on a monthly basis. The first one is the customer-facing dashboard, where Maximo will send survey invitations to tenants who request work once that work order is completed. The reports shown are customized through the plug-in to the Tableau database.
The second report on the screen is an internal work survey report. This survey invitation is sent with any work order completion notice through email to the tenant and work requester, and it's automatically sent through our Maximo workflow, which we've designed to automate this feature. In any given month we get over 1,000 survey responses, and as you can see here, there's a high satisfaction score for the contractors' work.
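The monthly roll-up behind that satisfaction score, over 1,000 responses averaged into one number, is conceptually simple. A sketch with an invented response shape (the real report is built in the BIRT/Tableau tooling described above):

```python
def satisfaction_summary(responses):
    """Return the count and mean of non-empty 1-5 survey scores."""
    scores = [r["score"] for r in responses if r.get("score") is not None]
    if not scores:
        return {"count": 0, "avg": None}
    return {"count": len(scores), "avg": round(sum(scores) / len(scores), 2)}
```

Skipping empty scores matters in practice: not every emailed invitation comes back with a rating, and counting blanks as zeros would drag the average down misleadingly.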
So that was reports and data. Next, I want to talk about the work centers, which Mike alluded to, and where Maximo is going versus what it was before from a mobile perspective. So in the last seven years of product development, we went live with a mobile product called Maximo Anywhere. What IBM is doing from a product perspective is redesigning the work centers to make them more user friendly. So when users log in to their device, they'll have the ability to select from an image and, in one click, log their service request.
Some features of the self-service request work centers shown here are the ability to classify ticket templates and use pre-defined issues that categorize records. There is the ability to avoid duplicates by suggesting and following similar service requests. Using the search function, you can look through the history of any service requests that have come in. You can also utilize your mobile device's camera to take a picture and attach it to the record for the end user to see. And finally, looking to the future of work centers, there will be capabilities to scan bar codes or QR codes to auto-populate location information.
And we've piloted a number of groups in the past year, and we're hoping to have much broader use of the work center capabilities moving forward with GSA, meeting with different groups to satisfy their business needs. OK. So from work centers, we're going to close out here on the innovation front with two programs in the pilot stage of rolling out this year. The first is Maximo Asset Health and Insights, otherwise known as MAHI, and the second is geographic information systems, GIS.
MAHI can improve visibility into the health of assets and locations to minimize downtime by preventing failures. MAHI looks at condition, costs, performance, and remaining useful life to help automate and optimize asset lifecycle decisions. It really gives users the functionality to calculate and visualize a health score, as you can see here on asset health, based on a number of customizable KPIs that the business can set to see how healthy their assets are.
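A MAHI-style health score, a blend of customizable KPIs into one number, can be illustrated with a simple weighted average. The KPI names and weights below are hypothetical, not MAHI's actual scoring model:

```python
def health_score(kpis, weights):
    """Weighted average of normalized KPI values (each 0.0-1.0), scaled to 0-100."""
    total_weight = sum(weights.values())
    blended = sum(kpis[name] * w for name, w in weights.items()) / total_weight
    return round(100 * blended, 1)
```

Letting the business set the weights is the point: a region that cares most about remaining useful life can weight that KPI heavily without changing the scoring code.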
In addition, the ICF team is also deploying GIS in our current release. Users will have the ability moving forward to tag data points at a specific location on their mobile device, and with MAHI and GIS as a whole, we further strengthen GSA's ability to track building information using predictive maintenance. And we look forward to championing the product as GSA's data maturity level continues to evolve.
So stepping back and looking at the smart buildings industry as a whole. Smart buildings, for those of you who are unaware, really consist of warehouses, factories, office buildings, and other enterprises. Intelligent structures leverage a variety of interdependent technologies such as AI, broadband wireless, cloud computing, and, as some of you are familiar with, the Internet of Things, which is a series of networks and systems to improve operational efficiency and enable a safer and more productive environment.
And a key statistic to note on this slide: smart building automation software and systems will be over $20 billion in value in North America by 2027, which isn't too far away. So we do see the value of a lot of these connected devices here at GSA. We're proud of the SkySpark integration and some of the data points that get pushed to us there. And I know GSA has a large initiative to move forward with IoT programs and really give thought to tracking data in a smarter way. And Mike and Fernando could speak more to GSA's Data-to-Decisions initiative, which this program falls in line with.
OK. I think that wraps our prepared deck. We could do a time check. It looks like we've got plenty of time for questions from the group. So I am new to this GoToWebinar. Let me see if I can pull any questions that have come in. Bear with me one second.
OK. The first question I'm seeing here: what fueled the decision to switch from a waterfall project management approach to an agile one? That is a good question. Mike and Fernando can chime in here, but my first thought is that we had a requirements traceability matrix with, I think, 863 requirements in total. And to attack that, we originally had a waterfall system where we got line-by-line sign-off, making sure we were compliant and meeting all the business lines' needs.
And everything that we needed to get a COTS product customized to GSA's satisfaction, that was an extremely time-consuming process, but rewarding at the same time. Once we got to more steady-state operations and went live, we had the ability to work with Mike and Fernando and all stakeholders on all of our teams to really prioritize what they want to see next in Maximo.
So they have a process that goes through the change control board to approve any changes that they want to see in the system, any enhancements, and the evolution of the product. Our team can be the champion and introduce new features, and then it's really on GSA and tier 2 and the core team to decide what they want to see next. So in the federal space, working in an agile development environment is a little bit more challenging, as we have a number of security protocols and red tape that we need to get through in terms of windows for testing in our lower environments, getting the core team to run through all our scripts, and making sure every single change is working and signed off on before we push to the production environment.
So I hope that answers the question. Mike, Fernando, do you have anything to add on our evolution in project management approach from a development cycle standpoint?
Fernando Castelli: Yeah, I'll add something real quick, just to add to that; it was very well said. This is Fernando. The agile approach is something that's not that old, but it is being widely used because it just allows teams to deliver more value. In other words, in the waterfall approach you do one thing, and then you take it to the next step, and then the next step. In agile, you're able to work in sprints. So you can involve multiple teams at the same time, plus it allows the user acceptance folks to jump in earlier in the process, versus waterfall, where it's later. So you are able to actually adjust a lot faster and make changes quickly. But that's basically why we tend to try to do everything around the agile methodology.
David Marcus : Well said, Fernando.
Fernando Castelli: I hope that answers-- yeah, I hope that answers the question a bit.
David Marcus: Mike, did you have anything to add on this one? OK. I will move to the next question here. What were some key takeaways or challenges driving change for the users at GSA? Let me think. So from a go-live perspective, thinking back to five years ago when we first introduced this program, of course, with any new technology there comes a necessary training element.
Our team did have a technical trainer, and that train-the-trainer approach worked very well for GSA. We had the ability to go in and work with the super users and subject matter experts in each of the 11 regions to really help them understand the product as we rolled it out to each specific region. As Mike said at the top of the call, with 11 disparate regions, they were all doing facilities management in their own way, and we unified it into one system while still making sure these stakeholders' voices were heard, so that when they log in to the system they see what matters most to them right away, and can also create custom reports to meet their business needs.
So the challenge, from a training perspective, was getting these users up to speed and on board. And then from that point, getting mobile user adoption has been a slow and steady process: we've got a mobile app that is functioning, but not as many users are using it on a day-to-day basis, because of their familiarity with the desktop more so than the mobile app. I think those were the biggest challenges that I saw for the GSA user base. Fernando or Mike, any thoughts on this?
Fernando Castelli: I don't know if Mike is still on the line or if we lost him. But, yeah, I can only expand on that and agree with David. I think that was pretty good: the train-the-trainer approach, involving and creating a group of SMEs across all the regions, and the communication plan that we used to let everybody know that this was coming, when, and what we were doing. I think that helped out. Not necessarily a challenge, but that helped out in creating a low-risk rollout for the end users.
David Marcus: Thank you, Fernando. So I'm seeing a question in the chat box about services like Google Anthos; GSA as a whole runs on the Google Suite. I'm not familiar, specifically, with Anthos. I don't know if the lines are opened up and we can have a dialogue on where that's pointing to, but I'll have to circle back with you on that question.
OK, and I only see one other question here. How has COVID-19 affected GSA and NCMMS operations? So our team actually is in a second five-year contract on this program. And in that new contract, we've switched the licensing model to tokens, so we can get better value out of each individual user session by looking at key data. And through the new token licensing, we have the ability to report out on users in the system on a day-by-day or hour-by-hour basis.
And we did see our user base go down roughly 20% to 25% in the first two months or so after COVID took effect in the US. But we're pretty much back to pre-COVID levels in terms of building management's ability to do their jobs. We're not seeing as many users in the buildings themselves, and the mobile device metrics reflect that, but as a whole we're fortunate that we're pretty much at steady state with COVID.
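The hour-by-hour usage reporting that token licensing enables boils down to bucketing session logins by hour, roughly like this sketch (the session record shape is invented for illustration; the real metrics come out of the licensing tooling):

```python
from collections import Counter
from datetime import datetime

def sessions_per_hour(sessions):
    """Count user sessions per hour bucket, keyed by the truncated login timestamp."""
    return Counter(
        s["login"].replace(minute=0, second=0, microsecond=0) for s in sessions
    )
```

Aggregating this way is how a trend like the 20-25% post-COVID dip becomes visible: compare the hourly (or daily) session counts across time periods.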
OK. I am not seeing any other questions coming in.
Speaker: Hey David. I see a question for you. Where-- I just lost it. OK. How many of the devices are providing data for analysis?
David Marcus: How many of the devices are providing data for analysis?
Speaker: Are devices in the buildings?
David Marcus: Let me think of how to best answer that. So GSA issues a number of mobile devices. We've got everything from the Android suite to the iOS suite. GSA has a predefined list of applicable devices that are compliant with NCMMS, and over time, they'll update that list as the technology advances. Those government devices are issued to the end users to perform their jobs. They have to follow the same level of security that our desktop application would, in terms of making sure they go through our secure auth program to protect the integrity of our data and security.
Did that answer the question, or was there something more specific?
Speaker: There was some more clarification, asking not about computer devices but HVAC equipment: heaters, chillers, things like that.
David Marcus: Yeah. So we track all that data. We can run reports against the applications, and we have specific reports set up to track different asset types, so we can break that out by region. We also have security in the system that will only let certain known contractors view their specific data. So in terms of reporting against different asset types, different work orders, different PMs, there is a multitude of ways we can show that data and report out against it.
Fernando Castelli: This is Fernando. I see something about-- and I don't know if it's a question. But is NCMMS being used to track COVID cases in GSA-managed buildings and the associated cleaning stats? David touched a little bit on that. But if that's a question from somebody out there, you know, we were required to implement a code in NCMMS to track any cases or anything that's related to COVID and the cleaning, and everything like that. So the answer to that is yes. We did implement that a few months ago, and you have it in NCMMS.
David Marcus: Thanks, Fernando. And I'm getting the hang of the webinar chat and seeing more questions flowing in, so thank you all for providing these. Are any IoT sensors feeding information directly into the application? So we alluded to our SkySpark integration-- I think Mike and I both hit on that-- of those devices having a one-way integration pushing to our Maximo database any values that are outside the specified range, depending on the asset type. So we do take that sensor data, and we definitely see that as an area to grow for GSA as the data maturity level comes up to speed and we can track against those in Maximo.
I see one about the structure of the teams that helped build this. I'm extremely fortunate that I've had the ability to be surrounded by extremely capable Maximo practitioners. We've got a lot of people who've been working with this software for many, many years, and a team that has mostly been there from the ground up to the current day. Not only the ICF team, but working with Mike and Fernando and their core team, the relationships built there have enabled trust and innovation. Having hard discussions about where both teams want to go, I think, is easier when you've got years of experience working with one another.
Fernando Castelli: I see a question about what the key contributor was to the cost savings of $2 million. I don't know if you want to talk a little about this, but certainly from an IT perspective and from an O&M perspective, individual licenses, of course, were really expensive all over the place. I think centralizing the system and having a generic or standardized set of guidelines definitely increases productivity, which reduces cost. So that's my two cents on that. Maybe David or Mike can add a little more to that.
David Marcus: That was pretty spot on, Fernando. That number came from the pure licensing cost of having the 11 different regions all operating on different systems to manage their buildings' data. And we also managed to save additional money by switching from an enterprise to a token licensing model in the recompete about 18 months ago. IBM and the product team have changed their licensing options, and monitoring usage through shared tokens drives more value and cost savings. It really lets us customize to the applications that we use and have the government pay only for specific applications. OK, I think that's it for the questions I've seen coming in.
Michael Hoffman: And David, this is Mike. I got knocked off the line, sorry. But if there's any question that I missed that you want me to circle back on, just let me know. I'm back in. Thank you.
David Marcus: Oh. Thank you, Mike. I think we're ready to wrap up, unless you have any final thoughts about the program as a whole that you wanted to share with our audience today.
Michael Hoffman: The only thing I would add is that we naively thought that within the first year or two, we'd have it pretty much built and it'd be rolling. But what we found is that it's definitely been a constant evolution. And as David mentioned earlier, the flexibility and agility of how ICF develops and configures to support us has really evolved nicely over the arc of the program. We continue to adapt and respond, and we make all the decisions, down to individual requirements and changes, based on business value. We work back and forth very closely, from our users to David and the developers, and prioritize what will be released in each timeboxed release. This has been very agile and very nice for us. That's all I'd add. So thank you. It's a pleasure working with David and crew on this program.
David Marcus: Thank you, Mike. I couldn't think of a better way to close this session out. I appreciate everybody attending. Hopefully, we provided some good information during your lunch break. If you would like to share today's presentation with any of your colleagues who might have interest or be working on tangential projects where we could potentially have synergies, feel free to do so.
Feel free to reach out with a note. The ICF content team behind this will be sending a follow-up, and hopefully we'll have this recording available for everybody to review and share with those who might have some industry need or knowledge of the program that we've implemented here at GSA.
OK. I appreciate everybody joining today and thank you for your time. Look forward to having conversations with each of you as you see fit. So thank you again.
Who we are
David: Good afternoon. My name is David Marcus, and I will be joined here today by my GSA counterparts, Fernando and Mike. We'll be going over our story for GSA of how we revolutionized their asset management system and reduced annual costs for the program.
So, I am the program manager for ICF. I've been on this program since the start, and lead the team as the architect of GSA's asset management system. I will hand it off to Mike and Fernando to introduce themselves.
Michael: Thank you, David. As David mentioned, I'm a program manager with the US General Services Administration. I work for the Office of Facilities Management, so I am the business line program manager for the National Computerized Maintenance Management System (NCMMS). Fernando, do you want to introduce yourself?
Fernando: Yes. Thank you, Mike. This is Fernando Castelli. I am the IT project manager for NCMMS. I work with the GSA office of the chief information officer as a liaison on any IT-related issues that are required for NCMMS.
David: Thank you, Mike and Fernando. So today we're going over what the NCMMS is, and how we've standardized, optimized, and built up the architecture for the program. We'll be going through some of the solutions and impacts that we've had on the program. And, as this is a webinar, we will be requesting that you hold off on questions for a Q&A session at the end, but feel free to type questions in the chat box on your screen as they arise, which we will get to at the end of the session.
This is a recording, so feel free to share this with your peers. If you want to reference this again, we'll be sending the information out after the session. If there's anything else that piques your interest, and you want to hear more from our team, feel free to reach out, and we will see if we can set that up in a future session or answer any questions that you might have.
What is the NCMMS?
David: So, what is the National Computerized Maintenance Management System (NCMMS)? I am going to hand it over to Mike and Fernando, who predated my time here before we went live with NCMMS, to tell you about how GSA was doing facilities management before this program went underway.
Michael: Thank you, David. This is Mike, again. Before we came into the National CMMS, what we had was a sort of patchwork of different systems. On the diagram, you can see a number of GSA's 11 regions colored in blue. At the top, you'll see Region 6, and you'll see Regions 10 and 9 on the West Coast. All of the regions with blue boxes had their own region-provided computerized maintenance management system (CMMS). So in one case, we had a system called PMA, and in a couple of cases we had Maximo.
In another case, we even had a custom in-house solution. So we had a patchwork of sorts on the west side of the map. And on the east side, with the exception of Region 11, which is the Washington, DC area, we had a number of contractor-provided systems, which meant that the mechanical teams who were contracted to maintain our buildings came in with their own system, which was anything from a CMMS to a spreadsheet or something else.
And so, in that legacy situation, if we wanted data about how things were going in a given building, we would put out a big data call, get a bunch of spreadsheets back, and try to merge them. In that old system, we didn't have consistency, for example, in how things were named, so it was hard to look across the portfolio and see what was going on.
And one of the really large downsides, especially in these regions where the contractors provided systems, those contracts typically lasted five years. The contractor would come in, load data either from scratch or perhaps from a previous contractor, and then when they’d leave, you'd lose a lot of that richness of the data and the equipment histories. That was a problem for us.
Again, we're trying to run a unified portfolio across 50 states and 11 regions. That was really the motivation to get all of this data in one place, get it consistent, get the processes consistent, and move forward with a national system.
How we use asset management
Michael: Now on slide 6. This is a quick look at what a CMMS, or a computerized maintenance management system, does for you and what ours does for us specifically. Our system is built on Maximo, which is an IBM product.
The core of this system is asset management. In this case, asset refers to major maintainable equipment in your facilities. It could be chillers, it could be boilers, things like that. We manage something north of a million assets across roughly 1,500 buildings.
We do day-to-day work order management. Tens of thousands to hundreds of thousands of work orders flow through every month, so it's a pretty large amount of data we're tracking. We're also integrated –– looking at the upper left of the diagram –– with something called GSAlink, which is a building analytics system. It talks to our building automation systems.
Think of it as a super-powered thermostat. It's running everything in that building, keeping track of humidity and air quality and controlling your fans, your set points, and all that sort of stuff. Well, we talk to that system. If that system finds something out of range, or perhaps you're letting in too much fresh air for some reason –– maybe you have a stuck damper or something –– GSAlink will analyze that sort of thing and send us a "spark," or a special work order, saying, "Hey, I'm analyzing this. I see a problem in the building. Here's a work order requesting you check it out and close the loop."
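As a rough editorial sketch of the flow Mike describes, an out-of-range sensor reading can be turned into a draft work order. The metric names, thresholds, and fields here are illustrative assumptions, not the actual GSAlink/Maximo integration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Reading:
    asset_id: str
    metric: str
    value: float

# Illustrative acceptable ranges; the real thresholds live in the
# building analytics configuration, not in code like this.
RANGES = {
    "humidity_pct": (30.0, 60.0),
    "supply_air_temp_f": (50.0, 60.0),
}

def check_reading(reading: Reading) -> Optional[dict]:
    """Return a draft work order dict if the reading is out of range, else None."""
    low, high = RANGES[reading.metric]
    if low <= reading.value <= high:
        return None
    return {
        "asset_id": reading.asset_id,
        "description": (
            f"{reading.metric} out of range: {reading.value} "
            f"(expected {low}-{high})"
        ),
        "status": "WAPPR",  # "waiting on approval" is a stock Maximo status
    }
```

For example, a humidity reading of 72% against a 30-60% range would produce a draft work order, while an in-range reading produces nothing, which matches the one-way, exception-only nature of the integration described above.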
Looking at the left of the diagram, we use the system to plan and track preventive maintenance. Again, we have, like I said, over a million assets. Not all of them require preventive maintenance, but it's still hundreds of thousands of assets that we actively maintain and track. Of course, we track breakdown maintenance and customer requests, or tenant requests, and we have rich reporting capability as well as mobile maintenance capability.
You can think of a CMMS, generally speaking, as the logistics heart for what you're doing to maintain your facilities. And in GSA, the National CMMS is the system of record for tracking all the work done under the service contracts in the building: the mechanics, and the custodial staff, the landscapers. All those folks are service contracts, and we don't necessarily track when they trim some things in the garden, but whether it's elevators or major equipment or spills, all that stuff flows through the NCMMS for tracking and service-level agreement (SLA) reporting and tracking, which we also share with our tenants.
What is our mission?
Michael: I have alluded to this already, but our mission, again, is a consistent nationwide CMMS and the supporting policies, procedures, and programs around it. That includes a lot of work on contracts –– hundreds of contracts –– to make them line up with our policies and use the NCMMS the way we need it used.
We provide complete, consistent, accurate equipment inventories, and that's been quite a journey in itself, and of course the maintenance histories around that equipment, so you can look for trends and problems. Again, it's consistent, easy to use, and efficient for work order processing and equipment inventory tracking. We don't do those data calls like we used to; we can just pull it up in a matter of minutes and get data that used to take weeks to get.
That really supports our data-driven decisions at all levels. We support rolling up to the whole portfolio, drilling down to a region, to a service center within a region, to a given building, and on down as deep as you need to go. With the capabilities of this system and our reporting tools, we can slice the data as we need to answer questions, whether they come from our own executives, from our building operators, or from auditors or congressional staff needing information.
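The portfolio-to-building roll-up described above amounts to grouping work order records by a chosen field. A minimal sketch, with field names that are illustrative rather than the real NCMMS schema:

```python
from collections import defaultdict

def rollup(work_orders, level):
    """Count work orders grouped by a field such as 'region' or 'building'.
    Field names are illustrative, not the real NCMMS schema."""
    counts = defaultdict(int)
    for wo in work_orders:
        counts[wo[level]] += 1
    return dict(counts)

# Hypothetical sample: three work orders across two regions
sample = [
    {"region": "R6", "building": "KC-01"},
    {"region": "R6", "building": "KC-02"},
    {"region": "R9", "building": "SF-01"},
]
```

The same records answer both the regional question (`rollup(sample, "region")`) and the per-building question (`rollup(sample, "building")`), which is what consistent naming across the portfolio makes possible.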
We provide that consistent maintenance across PBS, integrated with our system. We have job plans and guides that we've loaded into the system, and we work with engineers to update those periodically to get that consistency across the system, both for quality assurance and quality control. And getting that consistency across all these contracts –– something north of $2 billion a year in contracts that we're using to maintain and keep these buildings running –– it all flows through CMMS tracking. As I mentioned, this is a single solution for PBS, so it's unified.
How we got here
Michael: Go to slide 8. Thank you. So, briefly, on our timeline: we kicked it off in 2012, but we didn't just jump in and implement a system. We spent a lot of time working with our regional teams and external subject matter experts. We developed a number of SOPs, or standard operating procedures, for our 1,500 buildings. Getting that data in was a challenge; it came in as everything from spreadsheets to sometimes PDFs, which, if you work with data, is a tough way to load thousands of records.
But we got it loaded, and we continue to refine it. A portfolio this size is a never-ending process. I worked early on with a number of our contracting officers across the regions to get our contracts updated, again, to line up processes and requirements around this. So that's 2012 and 2014. Our rollout phase really began in the 2015, 2016 period. We got authority to operate in short order.
David's team, or his predecessor's team, was very strategic about that, and we managed –– I think Fernando can speak to it later –– to get at least a preliminary authority to operate in under six months. We were up and running, which was quite fast for our organization. We did multiple releases even in the early days; it was a waterfall approach, but it was broken into multiple releases because there was so much to do to organize that work.
We got our first buildings flowing work through the system in March of 2015, and then our PBS commissioner –– that's the top person in the organization –– put out an order basically saying, I want work orders flowing in all of our own buildings by the end of September of 2016. So that was our big effort: major features being added to releases and configuration. And in 2017 to 2018, we really started enhancing the system. We added tenant surveys.
In 2017, we integrated our building automation system. We upgraded in 2018 to a more recent version of Maximo, and every couple of years I expect we'll be doing that; it's a big effort for David and his team. We got our first mobile app users in that time period.
We updated our preventive maintenance guide, which is the overall guidance on how to maintain our equipment, and it's tightly coupled with what goes into the NCMMS. We keep that guide in the NCMMS and publish from there now. It's not a separate Word document saved in our database; if somebody does save a Word copy, we actually pull it out of the database and transform it.
We continue to improve that with inspections. There was a really good effort done by a team in 2018 to improve that; they tightened it up and actually raised the bar for the use of NCMMS as one of our core systems. Then in 2019 to 2021 –– at this point, we're almost to the end of our 2020, because of the way our fiscal year works –– we've been increasing our value. We are tracking national key performance indicators, and we're working hard with David and others to improve usability with some of those work centers, which is just a nicer, more modern user interface.
We're using something called COBie, which I won't go into, but it's a way to use a standard to get the data –– and we're talking tens of thousands of lines of data from each of our big construction projects –– and flow it efficiently into the National CMMS. That's a huge improvement that we've worked on with our partners in design and construction. We used to crawl through the buildings and try to get this information off the equipment boilerplates, and now it's an automated flow. We have customer dashboards, and we'll be doing tenant tickets in 2021, where they can put in service requests.
Let’s talk about data
David: Thank you, Mike. Hopefully, that brings you up to speed on the before picture and some of our current-day operations. I wanted to give the big picture here, looking at some data points, so you can see how robust the size of this implementation actually is. The ICF team managed to build a system that can both evolve and grow with the user base while implementing GSA's mission, working with key stakeholders to customize a commercial off-the-shelf product –– which Mike just took us through, IBM's Maximo –– for all the facilities management needs.
With over 20 million square feet of building data, GSA needed a solution that could accommodate an application of this magnitude. We house over 1,500 buildings and store more than 500 million records in our system. We have custom-tailored screens for a user base that consists of over 2,000 employees and contractors. This program has been live in production, I think, for a little over five years now, and we continuously work with Mike and Fernando and the GSA team to advance with the IBM product side from a technology standpoint and meet all the customer and user needs.
Looking at ICF's GSA operations, I'm extremely proud of the work that GSA and my ICF team have done. Mike took us through the history, but we were able to deliver over 800 initial requirements within a span of 14 months to stand GSA up in production. At first, we operated under a waterfall project management methodology, and over the years, we've progressed to a very agile team. We work with GSA to prioritize their enhancement backlog and roll out iterative sprints at their discretion from our Kanban board.
Our team also manages five environments, with one upper, and we make sure to follow ICF's best delivery practices for software development and for data migration. As the product owner, our team operates a tier 3 help desk as well and provides subject matter experts to troubleshoot complex issues. We have customized different workflows and software programs to track change requests and Maximo support. Our team also personalized the ServiceNow platform to feed into our JIRA ticketing system. This gives us requirements traceability with GSA and the ability to search and report against the issues that come in. This holistic approach enables us to build a knowledge base of reporting structures as the program continues to grow.
I'd also like to point out that the ICF team has deployed the largest federal Maximo cloud project to date. And I want to draw attention to how much data we actually have in the system. As you can see under our delivery best practices on the diagram, we realized the need to archive our data, and for that, we use our Optim database. We've since moved over half a billion records to the archive in the last year, consisting of data that is over 13 months old. GSA isn't just throwing it in the archive and never looking at it again. They have the ability to plug into our database and report on that data.
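The 13-month archiving rule described above boils down to partitioning records by age. A minimal sketch under assumed field names; the real archiving is done with a purpose-built tool against the production database, not application code like this:

```python
from datetime import date, timedelta

# A rough 13-month cutoff, approximated as 13 x 30 days for illustration.
ARCHIVE_AGE_DAYS = 13 * 30

def split_for_archive(work_orders, today):
    """Partition completed work orders into (active, archive) by age.
    The 'completed' field name is an assumption for this sketch."""
    cutoff = today - timedelta(days=ARCHIVE_AGE_DAYS)
    active = [wo for wo in work_orders if wo["completed"] >= cutoff]
    archive = [wo for wo in work_orders if wo["completed"] < cutoff]
    return active, archive
```

Keeping the archive queryable, as described above, is the design point: the records leave the live tables for performance, but reporting can still reach them.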
Key facts about NCMMS architecture
David: Moving on to the next slide. Here, we're going to go over the NCMMS architecture. I'm going to hand it off to Fernando, to walk us through what it took to set this up and some key facts about the architecture.
Fernando: Great. Thank you, David. This is Fernando, again. These diagrams are not necessarily that pretty to look at, so I'll just give a brief overview of what we accomplished, at least from my perspective. This program was a first at the Public Buildings Service (PBS): a solution that was going to be centralized on Amazon Web Services (AWS), in the cloud. That presented some challenges, which we were able to successfully overcome in a much shorter time frame than we expected.
We did start with a limited authority to operate, which took about six months; it usually takes about a year. That gave us the capability to start installing and configuring the CMMS as an application, and it gave us an extension of, basically, one year to achieve a full authority to operate by addressing all of the adequate security controls through our system security plan, which was created in accordance with NIST guidance and the Federal Information Processing Standards (FIPS). With NIST's guidelines on FIPS categorization –– NCMMS is an application that can be accessed anywhere, from any device –– we were able to use our internal secure auth solution for two-factor authentication into our system. We use GSA's current Active Directory for user management. Overall, from that perspective, we were able to basically get going in less than 18 months, which is an incredible accomplishment considering this was the first time for all of us.
Michael: Thank you, Fernando. That was a great overview. I'd also like to point out, for those of you who have AWS experience or are partners on the line here, that we not only provide hosting solutions to enable NCMMS, but we also provide AWS architects who continuously ensure the solution is engineered to maximize security, uptime, and availability. We leverage multiple AWS availability zones and regions to deliver automated failover. We also provide backup and restoration capabilities through a number of software programs.
Our innovations over the years
David: Next, we're going to look at some of the innovations we've rolled out over the years and are currently deploying here at GSA. This next screen shows what we call the Maximo start center. It's the first screen users see when they log in from the desktop application. Our team, the ICF team, worked with GSA to customize over 20 different start centers to meet the needs of each type of user in the system.
Mike took you through a couple of types of end users, but working with the business side, we really ironed out what each user needs to see to maintain and complete their work in the most efficient manner. The distinct security groups are important for differentiating end users and the first screen that they're looking at. In the above image specifically, we're displaying our result set from our SkySpark integration work.
This user has a number of roles that they have access to from a security perspective. So this O&M contractor is seeing the work list from, as Mike alluded to at the top of our call, the integration points to SkySpark. SkySpark pushes every sensor data point to Maximo when its value goes outside a certain range for us to track. And having the ability to see the most important work for each user when they log in is a key feature that we provide to GSA.
BIRT and reports that help drive decisions
David: Moving on from the start center, next we're going to talk a bit about reports and how data drives our decisions here at GSA. The GSA team worked with each of our 11 regions to track, monitor, and optimize key performance indicators. Maximo has a number of reporting capabilities, which I'm not going to cover in their entirety in this format, but the inherent reporting application is Business Intelligence and Reporting Tools (BIRT).
We created over 40 custom reports when we first stood up the program to meet some of the regional and business line needs. Beyond rolling those out, end users also have the ability to create their own, more basic reports using what we call query-based reports from any specific application. They can log in and create reports based on the fields and the tables for the application that they're looking at, whether it's work orders, assets, or their PM schedule.
We've also installed the latest version of Maximo in our lower environments right now, and it's in the beta stages; it has an even stronger reporting engine that GSA will have the capability to report out against. Looking at the screen here, there are a couple of examples of some of the data that Mike and his team look at on a monthly basis. The first one is the customer-facing dashboard, where Maximo will send survey invitations to tenants who request work once that work order is completed. The reports shown are customized from the plug-in to the Tableau database. The second report on the screen is an internal work survey report. The survey invitation is sent with any work order completion notice through email to the tenant and work requester, and it is sent automatically through our Maximo workflow, which we've designed to automate this feature. In any given month, we get over 1,000 survey responses, and as you can see here, there's a high satisfaction score for the contractors' work.
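The completion-triggered survey described above follows a simple pattern: when a work order reaches its completed status, an invitation email fires automatically. A minimal sketch; the status code, field names, and message are assumptions, not the actual Maximo workflow definition:

```python
def on_work_order_complete(work_order, send_email):
    """Send a survey invitation when a work order reaches COMP status.
    'send_email' is injected so the sketch stays testable; the message
    text and field names are placeholders, not the real NCMMS notification."""
    if work_order["status"] != "COMP":
        return False  # not complete yet; no survey goes out
    body = (
        f"Work order {work_order['id']} is complete. "
        "Please rate the work performed: <survey link>"
    )
    send_email(work_order["requester_email"], "How did we do?", body)
    return True
```

Tying the send to a status transition, rather than a scheduled job, is what makes the invitation arrive together with the completion notice.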
What the future holds
David: So that was reports and data. Next, I want to talk about the work centers, which Mike alluded to, and where Maximo is going versus what it was before from a mobile perspective. For the last seven years of product development, we got by with a mobile product called Maximo Anywhere. What IBM is doing from a product perspective is redesigning the work centers to make them more user-friendly, so when users log in to their device, they will have the ability to select from an image and, with one click, log their service request.
Some features of the self-service request work center are shown here: the ability to classify tickets using templates and predefined issues that categorize records, and the ability to avoid duplicates by suggesting and following similar service requests. Using the search function right there, you can look through the history of any service requests that have come in. You can also use your mobile device's camera to take a picture and attach it to the record for the end user to see. And finally, looking to the future of work centers, there will be capabilities to scan bar codes or QR codes to auto-populate location information.
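The QR-code auto-populate feature mentioned above amounts to decoding a scanned payload into the location fields of a service request form. A sketch under an assumed `SITE|BUILDING|FLOOR|ROOM` payload format; the real encoding would be up to the deployment:

```python
def parse_location_qr(payload):
    """Parse a hypothetical 'SITE|BUILDING|FLOOR|ROOM' QR payload into the
    fields a service request form could pre-fill. The payload format is an
    assumption for illustration, not the actual work center encoding."""
    parts = payload.split("|")
    if len(parts) != 4:
        raise ValueError(f"unexpected QR payload: {payload!r}")
    site, building, floor, room = parts
    return {"site": site, "building": building, "floor": floor, "room": room}
```

Pre-filling these fields from a code posted near the equipment removes the most error-prone step of a tenant ticket: typing in where the problem is.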
We've piloted a number of groups in the past year, and we're hoping to have much broader use of the work center capabilities going forward with GSA, meeting with different groups to satisfy their business needs. So from work centers, we're going to close out the innovation front with two programs that are in the pilot stage of rolling out this year: the first is Maximo Asset Health and Insights (MAHI), and the second is geographic information systems (GIS).
MAHI can improve visibility into the health of assets and locations to minimize downtime by preventing failures. MAHI looks at condition, cost, performance, and remaining useful life to help automate optimized asset lifecycle decisions. It really gives users the functionality to calculate and visualize a health score, as you can see here on asset health, based on a number of customizable KPIs that the business can set to see how healthy their assets are.
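A KPI-driven health score of the kind described above is, at its simplest, a weighted average of per-KPI scores. This is only an editorial sketch of the idea; the KPI names and weights are illustrative, not MAHI's actual scoring model:

```python
def health_score(kpis, weights):
    """Weighted average of per-KPI scores, each on a 0-100 scale.
    KPI names and weights are illustrative, not MAHI's actual model."""
    total_weight = sum(weights[name] for name in kpis)
    if total_weight == 0:
        raise ValueError("weights must not sum to zero")
    return sum(kpis[name] * weights[name] for name in kpis) / total_weight
```

Letting the business set the weights is what makes the score customizable: a region that cares more about equipment condition than maintenance cost simply weights condition higher.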
In addition, the ICF team is also deploying GIS in our current release. Users will have the ability, moving forward, to tag data points at a specific location on their mobile device. With MAHI and GIS as a whole, we further strengthen GSA's ability to track building information using predictive maintenance, and we look forward to championing the product as GSA's data maturity level continues to evolve.
So, stepping back and looking at the smart buildings industry as a whole –– for those of you who are unaware, this really consists of warehouses, factories, office buildings, and other enterprises. Intelligent structures leverage a variety of interdependent technologies, such as AI, broadband wireless, cloud computing, and, as some of you are familiar with, the Internet of Things (IoT), which is a series of networks and systems to improve operational efficiency and enable a safer and more productive environment.
A key statistic to note on this slide is that smart building automation software and systems will be over $20 billion in value in North America by 2027, which isn't too far away. So we do see the value of these connected devices here at GSA. We're proud of the SkySpark integration and some of the data points that get pushed to us there. And I know GSA has a large initiative to move forward with IoT programs and really give thought to tracking data in a smarter way. Mike and Fernando can speak more to GSA's Data-to-Decisions initiative, which this program falls in line with.