Dr. Eddie Amos, GE Digital
An interview with the Chief Technical Officer of GE Digital
The following is a transcript of the audio available via the player above. The audio file is the definitive source.
Ali Tabibian: Welcome, welcome, welcome everyone to this episode of Tech. Cars. Machines. I'm your host, Ali Tabibian, and you can read more about me and GTK Partners, the producer of TCM, in the episode notes. We are kicking off our first “Machines” episode with a bang, and a big bang at that. [00:00:30] This episode is with Eddie Amos, chief technical officer of GE Digital. General Electric, as many of you know, is the company that Thomas Edison started. Today, the company is mainly a purveyor of heavy capital equipment such as jet engines, and that's why you heard the sound you heard at the opening. GE Digital is the company’s software unit, and it's all about embracing capital equipment with modern software smarts. Sounds like “Tech. Cars. Machines.”, doesn't it?
Now, no one has done more and invested more to evangelize, and really bring to reality, [00:01:00] a world where complex physical machinery is networked with sensors and software than GE. GE's name for this construct is the Industrial Internet. Generally the industry started getting serious about the industrial Internet around 2015 or so. GE, however, started publicly focusing on the subject, at the CEO level no less, in 2011. A couple of iterations of bringing things under the same roof led to what is now called GE Digital, and [00:01:30] that unit is headquartered in San Ramon, which is about an hour east of San Francisco, with about 2,000 employees.
The investment in GE Digital has been huge. About a billion dollars over five years is what was announced in 2011 as the founding investment in this unit, and over one and a half billion has been spent on a handful of acquisitions since then, not to mention, of course, a large amount of time spent by executives around the company.
GE wasn't just the first entity to blaze this path. [00:02:00] It was way out in front. Of course, when you're pioneering into uncharted territory, you'll wind up going down some dead ends. GE, over the years, has adjusted the scope of its offerings, sometimes adding through acquisition, sometimes letting others do what it had originally intended to do itself. It’s also now mainly focusing its offerings on end-markets into which GE itself sells equipment, rather than something broader, which is where it started.
The reward for being early and dogged is that there's [00:02:30] no one, whether among other large equipment vendors, major tech firms, or competing startups, that has the scope of GE Digital. I've followed the space and attended GE Digital's conferences for four or five years now, and I've noticed that attendee and competitor attitudes have really shifted over that time: from, honestly, a skeptical curiosity to now an emerging respect.
In Tech. Cars. Machines we've talked a lot about the issues of sensing, connectivity [00:03:00] and data analysis that are common between the worlds of cars and machines. So I should point out why you really won't hear much about sensors and connectivity in this “Machines” episode. That's because this “Machines” episode is about really big, expensive capital equipment. A power turbine probably costs around $25 million. A big jet engine costs anywhere from $10 to $35 million. Now those are retail prices, and I never pay retail. Even so, when something is that expensive, it made sense to [00:03:30] put sensors on it and connect it starting decades ago, when doing those things was actually really expensive and difficult.
What's different today with the equipment and sensor data is the ambition around what to do with that data. It used to be about reporting machine characteristics to a human, generally about what's already happened. These days, there's a torrent of data coming from the sensors going directly to computing equipment that is trying to improve the operating efficiency of those [00:04:00] assets and to make predictions about what's going to happen, rather than report what's already happened, to influence things like maintenance and reliability.
Let me provide a little detail on some acronyms and terms of art that Eddie will use during this episode. Using software to improve the operating efficiency of an asset is referred to as “asset performance management” or APM. Now this is different from “BI” or “business intelligence” [00:04:30] and some of these other terms you've heard. That software tends to focus on processing financial or inventory data. The asset in APM is the machine itself. APM is about improving “KPI”s or “key performance indicators” of a process or a piece of equipment. The repository all this data goes into has historically been called a “historian”. You'll hear Eddie refer to historians.
Crunching all this data usually means using statistical techniques. A key one [00:05:00] for part failure is called the Weibull analysis, W-E-I-B-U-L-L. I believe that approach was perfected in the 1950s, actually, but don't quote me on that. You'll hear the term “SME”s or “smees”; in this conversation that stands for subject matter experts, the people with deep domain knowledge of a particular piece of equipment or process. The terms “MI”, “ML” and “AI”, which you'll also hear, of course refer to machine intelligence, machine learning and artificial intelligence.
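Since the Weibull analysis comes up again later in the conversation, here is a minimal sketch of the kind of quantity it produces, using the standard two-parameter form. The shape and characteristic-life values below are invented for illustration, not taken from any GE equipment.

```python
import math

def weibull_reliability(t, beta, eta):
    """Probability a part survives past time t under a two-parameter
    Weibull model (beta = shape, eta = characteristic life)."""
    return math.exp(-((t / eta) ** beta))

def weibull_b_life(p, beta, eta):
    """Time by which a fraction p of the population is expected to fail
    (e.g. p=0.10 gives the classic 'B10 life')."""
    return eta * (-math.log(1.0 - p)) ** (1.0 / beta)

# Hypothetical bearing population: wear-out failures (beta > 1),
# characteristic life of 10,000 operating hours.
beta, eta = 2.5, 10_000.0
print(round(weibull_reliability(5_000, beta, eta), 3))  # survival at 5,000 h
print(round(weibull_b_life(0.10, beta, eta)))           # B10 life in hours
```

A shape parameter above 1 models wear-out (failure rate rising with age), which is why the technique fits rotating machinery so naturally.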
[00:05:30] Without further ado, here’s Eddie Amos.
Voiceover: Tech. Cars. Machines. Subscribe here or at gtkpartners.com.
Ali Tabibian: Great. So we're here today with Dr. Eddie Amos who's the chief technical officer of GE Digital if I got that right. Did I get that right, Eddie?
Eddie Amos: I'm kind of a jack of all trades. I'll do whatever they ask me to do. So a little chief technology officer here, a little building of applications there, whatever it takes.
Ali Tabibian: Great, great. So we're here in San Ramon which is the headquarters for GE [00:06:00] Digital. Eddie, I think you're based out in Roanoke, Virginia.
Eddie Amos: Or as my wife would say, I really live on American Airlines, probably in seat 28B, flying back and forth. I live in Roanoke, but I have my time out here.
Ali Tabibian: Nobody on this podcast is going to believe you're flying coach but we'll go with it.
Eddie Amos: You'd be surprised. The next time you're on American Airlines, I'll be the guy at the back.
Ali Tabibian: That's great. So you know what, Eddie, give us a little bit of your background. You come to GE [00:06:30] with obviously a very technical background, but you came to GE Digital through an acquisition. Maybe give us a little bit of that history.
Eddie Amos: Yeah, so that's great. Many years ago I worked for a little company called Lotus Development; most folks probably don't remember Lotus Development. We made a spreadsheet, and I actually worked as a field engineer on a product called Lotus Notes back when it was introduced. Worked my way up, IBM acquired us. I was on the product management team at IBM for WebSphere for many, many years. I got tired of travelling, believe it or not, so I went out and did two [00:07:00] startups. One failed miserably, one was fairly successful. I got recruited by Microsoft. I actually ran Visual Studio and .NET for many years for them before retiring, and then moved back to Roanoke, Virginia to be a college professor. There I met this gentleman who started Meridium. He had some really good questions, and I was fascinated by his company, so next thing you know I was working for him, and then GE acquired us.
Ali Tabibian: Great. So that's actually a really interesting [00:07:30] angle from which to explain what GE Digital is and does. I was recently at Minds + Machines, which is the premier conference associated with GE Digital, held about every year, and asset performance management, which is essentially what Meridium was, was really front and center as the offering. Coming at it from that angle, maybe you can explain to us what the original insight for GE Digital was, and after [00:08:00] a few years of being at it, what is it that the customers really want from GE? Obviously when you're first, there's a process of figuring out, at both the vendor and the customer level, what works. What is working now? It seems like Meridium is a big part of what's working now.
Eddie Amos: So if you sit back and look at the history of asset performance management, the term was probably coined about 22 years ago by two gentlemen, Bonz Hart and Leif Erikson, who worked for Gartner at the time. They were describing this space [00:08:30] which wasn't really EAM and wasn't really ERP. It was more focused on the reliability of the assets. How do you make sure that you're protecting the people, the planet, the profits of various companies? So they sat around one night and said, well, it does this, it does this, and they finally settled on the term asset performance management. So it has been an education process over the years. GE is probably the company that made APM cool.
[00:09:00] So if you sit back and think about it, there are a lot of interesting things that happen with assets. GE's one of the few companies that actually designs, builds, manufactures, services, and operates equipment. And if you think about it, from the time you design an asset to the time you decommission it, a lot of things happen. So you may have designed it one way, but then the operational aspects of it may be something totally different. So how do you come up with things like failure codes? How do you understand if something is strange, not quite in sync [00:09:30] with how it's supposed to operate? If I have downtime, what other piece of equipment is it going to affect? Do I have the critical spares where I need them? Can I say with great certainty that if I run this plant at 110% I'm not going to cause harm to the environment or my planet?
So as we sat back and looked at it, we'd been doing this for a long time, but GE took it to the next level. GE had instrumented many of the turbines they make, the airplane engines. Being able to come back and take that amount of data with the [00:10:00] insights that we brought from APM, it was a match made in heaven. They had a lot of content, we have a lot of content. We have a lot of assets that we were able to bring into the model. So where a lot of companies could come back and perhaps run analytics, that's only one thing you do. Basically, you're importing data and then running a BI tool to come back and find an anomaly. APM is much broader than that. We're coming back and not only finding those anomalies, we're telling you why it happened, when it happened, and when it's likely [00:10:30] going to occur again. What other pieces of equipment did it impact, what is your risk exposure, and how do you make sure that you're reducing that so that you keep your plant operational?
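The contrast Eddie draws here, a BI tool that merely flags an anomaly versus APM's broader context, can be made concrete with a toy baseline. The sketch below is only the "find an anomaly" step he describes; as he notes, it says nothing about cause, impact, or recurrence. The readings and threshold are invented for illustration.

```python
from statistics import mean, stdev

def flag_anomalies(readings, z_threshold=2.5):
    """Return indices of readings more than z_threshold standard
    deviations from the mean of the series (a plain z-score check)."""
    mu, sigma = mean(readings), stdev(readings)
    return [i for i, r in enumerate(readings)
            if sigma > 0 and abs(r - mu) / sigma > z_threshold]

# Hypothetical bearing-temperature readings with one obvious spike.
temps = [71.8, 72.1, 71.9, 72.0, 72.2, 71.9, 95.0, 72.1, 72.0, 71.8]
print(flag_anomalies(temps))  # index of the anomalous reading
```

Everything APM adds on top of this, failure codes, affected equipment, risk exposure, recurrence, lives outside what such a statistical check can see.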
So it's a whole different mindset. A lot of folks in the industry go back to the old Jiffy Lube model, preventive maintenance, change your oil every 3,000 miles. Well that's good, but guess what, most modern cars can now go a year or 10,000 miles with synthetic oils before you have to change it. But I can promise you, next time I'm out flying [00:11:00] my favorite Cessna, they're going to ask me to have that airplane checked every 100 hours for preventative maintenance, which you probably don't need to do. So by instrumenting things and understanding how the equipment operates, you can come back and build in a lot more efficiencies.
Let me give you an example. There's a refinery partner that I worked with that wanted to build a new refinery. But it's very costly. You have to go through environmental concerns, you have to get a lot of regulations passed on your behalf, and then you also have to sit back and look at the sheer [00:11:30] CapEx of building a refinery. So the trick is, how do you keep the existing refinery running longer? Now, if oil is $100 a barrel you want to run it 7 by 24. But you don't want to do something that's going to cause problems with the equipment or the various environmental things you're trying to do. So by using the APM methodologies, we were actually able to extend this plant's refinery production by about 18 months. What they basically created was a virtual plant. [00:12:00] They were able to double their output without increasing their CapEx, stay within the safety parameters, and never miss a beat.
We try to do that with every industry we serve. And because we design, build, manufacture, service, and operate equipment, we have those unique insights. We know the failure codes, we know the recommendations. We operate equipment, so we know the operating characteristics. We service equipment, so we know the servicing history of it. So when you bring all that together, it's not like you're updating a file and running analytics against it; you're running real-time analytics [00:12:30] against the process to help you build more efficiencies into your operation.
Ali Tabibian: Great. Thank you for that explanation. Meridium, when it was independent, wasn't associated with any particular manufacturer. It was coming at it from the software angle. What was your experience being a pure software vendor versus now an integrated software and equipment vendor, and why was one the right answer in one period of time while the other is the better answer now?
Eddie Amos: So, [00:13:00] at Meridium, we were very heterogeneous in nature. We worked with any number of historians, any number of equipment manufacturers. We actually created benchmarking software where we would work with customers to bring in heterogeneous data types, to understand things in terms of the 18 KPIs that were important to them, or to let them benchmark against their peers. For us, as for anybody in the space, there was always the struggle to have more data. [00:13:30] More data in terms of how things were built and manufactured. How things really work. What the operational aspects of that data are.
So when GE acquired us, all of a sudden we were like kids in a candy store, because we had a plethora of new data that we'd never had access to. So then it was the ability to take what we had already done from a heterogeneous perspective, add in interesting things that GE had done across the various business units, and all of a sudden change the mathematical formulas, adding insights that [00:14:00] our customers had never imagined in the past.
So in one sense, it's kind of nice to be a heterogeneous standalone, but you always run into that content problem. If you're a company like GE, you have a wealth of content; then it's more an exercise of which content you use, because you can very quickly get into data overload if you're not careful. But I think in many ways, by GE acquiring us, we've been able to sharpen our game. We've been able to go into markets we never imagined, and we've got better insights into various [00:14:30] pieces of equipment that we never would have figured out on our own, because we didn't design, build, manufacture and service equipment.
So that's a problem for many of the startup analytics companies out there today. They're only going to be able to give you a point of view. You may have an ERP system with an EAM component, an enterprise asset management system, that can tell you work order history. Great. Maybe I add in a historian that gives you pressure, flow, temperature. But that's still very limited data compared to everything that happens in the life cycle of an asset. It's not going to tell [00:15:00] you the failure codes, it's not going to tell you the recommendations. It's not going to tell you what's actually happening from an operational perspective, or the servicing history.
So when you bring them all together, it really, really helps you define the algorithms more precisely. It allows you to leverage things that we're moving into very aggressively, like artificial intelligence and machine learning. And because we have all of those data flows, it's not like we're going out saying, give me your data and we're going to build something, or bringing in 100 data scientists to build something new. [00:15:30] We're actually building the intelligence right into the applications. So at our design center, our Monitoring and Diagnostic Center in Atlanta, where we actually manage and monitor a third of the world's power supply, we're adding AI components directly into the software right now so we can go back and look at failure codes. We can go back and look at the history of components. We can come back and make better decisions in near real time without having as many humans in the loop. So it's quite interesting, and I don't think we could have done that as a stand-alone entity without GE.
Ali Tabibian: It's interesting. [00:16:00] So if I maybe oversimplify it, it sounds like if you're not the equipment manufacturer, you're starting with the data you have, or the data you can get, and trying to hopefully get to a great answer. But if you are the equipment manufacturer, you have the most relevant data to start with, so the question is wrapping the software skills and the information assessment skills around that.
Eddie Amos: Let me give you another example. I was called out to one of our energy customers in the Midwest two weeks ago. Part of this organization used the old Meridium software, which is now [00:16:30] GE APM, and the other part didn't. I've been trying to get the part that does not use APM to use it for probably three years now. I was in town, so I stopped by and saw them, and they wanted to show me their new APM-like software. Basically they had created a data lake, which is great, and they were running a famous off-the-shelf BI tool on top of it, and they found an anomaly. They were so proud of that anomaly, and I was happy for them that they found an anomaly. But once again, they couldn't [00:17:00] tell me what caused it, what other pieces of equipment were affected, or what they were going to do in their ongoing maintenance operation to make sure it never happened again.
So it was a very, very limited data set. It's like a lot of the startups you hear about, in Chicago or here in the valley. They come along and say, we have a platform, but then you have to give us your data. Well, great. That's going to give you a point of view, but has it given you all five points of view that we have, because we design, build, manufacture, service and operate [00:17:30] equipment? Well, the answer is no. You can come up with decent insights perhaps on one stream of data, but you're never going to get the optimal solution with just one.
Ali Tabibian: So let me expand on that point a little bit then. As a frequent airline passenger, I'm very thankful that the key components that GE provides, the engines, are probably some of the most reliable creations of humanity. I'm assuming that [00:18:00] other things that GE does are imbued with the same kind of quality, the turbines for power generation, etc. If they're the most reliable objects in that system, how much does GE have to go beyond what it provides and capture a heterogeneous environment for the overall system reliability to be something that it can affect?
Eddie Amos: The way that we would look at it, we go back to an airline who buys our engines, and we're focused on outcomes. How [00:18:30] many safe takeoffs and landings can we have? You sit back and look at the airline industry; it goes back to a lot of MRO practices that have been around for years and years and years. And in many ways the GE Aviation group has been the leader in terms of thinking about this. They have built sensors directly into the engines. They have built the most incredible algorithms that sit on the Predix platform, that can come back and tell you things like sliding strategies. [00:19:00] You've got a particular engine that's coming up for maintenance repair. You happen to be over a certain area. Maybe it's time to bring it in, because the parts you need are at this facility at this time. How do you optimize everything that happens from the time that plane is boarded until it takes off?
We're working with several of the airlines right now, with our chief digital officer in aviation, on how they take our time series components and put them back in to build more efficiencies on top of it. Once we have the data and we have incredible [00:19:30] algorithms, the question is how do we build on them, how do we make them smarter.
Going back to my artificial intelligence and MI comment earlier, part of our team right now is working with the Aviation group on trawling through all of that data that we get off the engines, which is a lot, and coming up with: what are the characteristics, are there things that we haven't seen, are there things that we can improve on, and how do we take that full circle right back into the manufacturing process? It's pretty fascinating.
The same happens with power, the same happens with [00:20:00] transportation. But the thing I'm really excited about, when you think about the aviation example you just gave us: since we design, build, manufacture, service and operate equipment, I'm sure you've heard about additive manufacturing. Additive is probably one of the things that gets me excited every day of my life, along with my little 3D printer at home. It prints some pretty cool things. But imagine printing components of an airplane engine.
Well, if you start thinking about that in terms of everything we do in a digital transformation [00:20:30] within GE Digital, all of a sudden we have equipment on the plant floor. We have automation software, we have manufacturing execution systems. So we're going through that whole design process of building these components. Then we have APM in the middle that's monitoring the reliability of these components, telling you that something's likely to break. Then we have field service software like ServiceMax that sits in there so that we can come back out and make sure that we have the right service technician with the right parts at the right location [00:21:00] at the right time.
Now, one of the most costly things our customers have is a lot of spares sitting around. If you've got a critical asset, you want to make sure you've got the spares close by in case it breaks, because you don't want downtime. So imagine this world now: APM says there's a 95% probability that a particular fan blade is going to break or malfunction within the next 30 days. Send that over to the MES system, send it to additive, print it out, [00:21:30] report back to field service, and have it sitting there when the technician shows up. Being able to build that proactive, predictive maintenance component into the software, and being able to reduce those spares, is priceless. Aviation is breaking new ground in that arena, and we're just leveraging our software to help them out.
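The APM-to-MES-to-additive-to-field-service chain Eddie describes can be sketched as a simple threshold rule: a high-confidence failure prediction triggers the downstream steps so the part is waiting when the technician arrives. All class names, fields, and the 95% threshold below are illustrative, not a real GE API.

```python
from dataclasses import dataclass, field

@dataclass
class Prediction:
    asset_id: str
    component: str
    failure_probability: float  # probability of failure in the next 30 days

@dataclass
class WorkOrder:
    asset_id: str
    component: str
    actions: list = field(default_factory=list)

def dispatch(pred: Prediction, threshold: float = 0.95):
    """Turn a high-confidence failure prediction into a proactive work order."""
    if pred.failure_probability < threshold:
        return None  # keep monitoring; no spare printed, no truck rolled
    wo = WorkOrder(pred.asset_id, pred.component)
    wo.actions.append("notify MES: schedule additive print of replacement part")
    wo.actions.append("notify field service: stage part on site before the visit")
    return wo

wo = dispatch(Prediction("turbine-7", "fan blade", 0.96))
print(wo.actions)
```

The interesting design question is where to set the threshold: too low and you print spares you never use, too high and you are back to unscheduled downtime.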
Ali Tabibian: That's a really interesting thread that I'd like to pursue for just a second, and I want to get back to the AI and ML stuff that you touched on as well. In the [00:22:00] world of industrial IoT, I think the large set of discussions out there is about reducing unscheduled downtime. What you seem to be saying is that a very significant part of the value, especially when it's a really expensive component like a jet engine or a turbine, is actually doing scheduled maintenance either less often or more intelligently. I think that's an under-reported source of value, as far as the popular press at [00:22:30] least is concerned.
Eddie Amos: As we build more intelligence into the software, I mean, we have incredible geospatial information anyway, especially from the newer sensors that are out there. We probably know within a couple of meters where the equipment is and where the service tech is. So imagine you have a service contract and you're sending somebody out to work on plant facilities A, B, and C, but all of a sudden your APM software says there are a couple of components within six meters of you that are moving into [00:23:00] the questionable range. Having him or her check that out right then and there saves having to send out a service tech again.
Another great example: two pumps [00:23:09] come off the line one serial number apart. One goes to Baytown, Texas, one to Minneapolis-St. Paul. On one, you never have a problem. On the other, the seal is failing every six months. It goes back to your MI and AI question: what's happening?
In the old days, you'd sit back and send the tech out to replace the seal over and over and over. Using the [00:23:30] software that we're putting together now and some of the MI and AI capabilities, we can start injecting other data sources into that and have the data scientists do more what-ifs. So what's the difference between Baytown, Texas, which is down near Houston, and Minneapolis-St. Paul? One has a lot of salinity in the air. By being able to pull in everything from atmospheric data to geospatial data, to see if it's on an incline, you can come back and say, ah yeah, the salinity in the air is breaking down those seals at a higher rate [00:24:00] than normal everywhere else. So what can you do with that? How many other pumps do I have in similar locations around the world, so that when I send the service tech out I can make sure I have the part, so they can repair it and it never goes down?
Oh, and by the way, let's go back to R&D and figure out why that seal was failing anyway, and upgrade the material so that when we replace it we don't ever have to deal with it again. It's that whole closed-loop process. Being able to predict it, and being able to come back and make sure you've got the right people at the right location with [00:24:30] the right parts on time, makes happy customers, because we want to make sure they're running 7 by 24.
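The what-if analysis Eddie walks through, joining failure records with site environment data to expose the salinity effect, can be sketched in a few lines. The sites, rates, and environment labels below are invented for illustration; a real analysis would also test statistical significance.

```python
from collections import defaultdict

# Hypothetical seal-failure rates per year, keyed by site.
failures_per_year = {
    "baytown-tx": 2.0,
    "minneapolis-mn": 0.0,
    "corpus-christi-tx": 1.5,
}

# Hypothetical site environment data, the "other data source" injected
# into the analysis (atmospheric / geospatial attributes).
site_env = {
    "baytown-tx": "coastal-saline",
    "minneapolis-mn": "inland",
    "corpus-christi-tx": "coastal-saline",
}

# Group failure rates by environment and compare averages.
rate_by_env = defaultdict(list)
for site, rate in failures_per_year.items():
    rate_by_env[site_env[site]].append(rate)

for env, rates in sorted(rate_by_env.items()):
    print(env, sum(rates) / len(rates))
```

The same grouping answers the follow-up question in the transcript: every site tagged with the high-failure environment is a candidate for pre-staging the part.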
Ali Tabibian: How is GE equipment today different, or being designed differently, because of what GE Digital has been doing the last three or four years? Has it been long enough for that feedback to affect the design of the equipment?
Eddie Amos: I think when you sit back and look at the insights that we're giving, and what our global research group is doing, the insights have increased. We're able to come back and identify possible failure [00:25:00] points quicker. We're also probably finding more optimization points faster than anything else. So in a lot of cases, as we bring the information together (the big thing about IoT is that you ingest, you store, you analyze, and you act on the data), it comes back to the fact that we're unique: we're collecting a lot of data, we can bring it back in, and you basically build on what you know. A Weibull analysis is a Weibull analysis is a Weibull analysis. But imagine running other components alongside that Weibull analysis [00:25:30] to come back with better insights.
We have customers that come back and want to combine things to make something new and interesting. In the petroleum industry, there's something called API 580 and 581. One is a quantitative methodology, one is a qualitative methodology. If you're a mathematician, you're probably going to choose one or the other and be happy with it. But we have customers that say, no, what I really want is a quasi formula. Can you give me the ability to take two data sources, add them together, and run new algorithms on them from an AI perspective? [00:26:00] Well, sure you can. But you sit back and say what if, what is possible. And that's when it becomes interesting. And that's when it helps you go back and build better products, whether you're in R&D, whether you're in the GRC researching it, or whether you're folks like me in applied science, making it work every day.
Ali Tabibian: Interesting. As a one-time engineer myself, when you see assets that are extraordinarily reliable and affect life safety, you know they're heavily over-engineered. So this data may actually be something that makes the production of that [00:26:30] equipment more efficient overall. Whether you really need 12,000 sensors in a jet engine is the question; eventually, if a sensor never reports something of value, you could probably do without it.
Eddie Amos: I share a story quite often about the reliability component. I'm a private pilot. When I was living in the Pacific Northwest, I used to have these different flight paths I would go out and look at, and one was over a particular oil refinery up in the Puget Sound. One day I flew over it and part of it was gone. They had [00:27:00] a catastrophic event in which people lost their lives. It was kind of, wow, how does that happen? It comes back to identifying those critical assets, understanding those lifing algorithms, and understanding what you can do. That's what got me interested in the space in the first place.
There are things called integrity operating windows. If you run this facility at 100%, you can expect X amount of life. Person B may run it at 110% but forget to do the right logging, [00:27:30] or the software doesn't know about it; the next person runs it at 120%, and you get a catastrophic event. With the what-ifs and the software we have now, we're actually recording all of that. So before you press the button to run at 110%, the software can tell you, no, not a good thing to do, because you've basically affected the lifing of that piece of material or that piece of equipment. Don't do it.
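The integrity-operating-window check Eddie describes can be sketched as a guard on the requested load setpoint. The life-consumption model below, a simple power law with a made-up exponent, is purely illustrative of the idea that modestly exceeding the window can sharply shorten equipment life; it is not a real lifing formula.

```python
def life_consumption_factor(load_fraction: float, exponent: float = 8.0) -> float:
    """Relative rate at which design life is consumed at a given load
    (1.0 = rated load). The steep exponent is an invented stand-in for
    how quickly lifing degrades above the rated point."""
    return load_fraction ** exponent

def check_setpoint(load_fraction: float, limit: float = 1.10) -> str:
    """Warn before the operator 'presses the button', per the transcript."""
    if load_fraction > limit:
        return "blocked: outside integrity operating window"
    factor = life_consumption_factor(load_fraction)
    return f"allowed: consuming life {factor:.1f}x faster than at rated load"

print(check_setpoint(1.10))  # at the edge of the window
print(check_setpoint(1.20))  # the catastrophic-event scenario
```

The point of recording every excursion, as Eddie notes, is that the cumulative life already consumed can feed into this check, not just the instantaneous setpoint.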
But you're right. As an engineer, being able to come back and put that in the feedback loop, whether you engineered [00:28:00] it too heavy, engineered it too light, or engineered it just right. Those are great insights we can provide.
Ali Tabibian: Interesting. When you say artificial intelligence, AI, what does that represent to you?
Eddie Amos: So there are two pieces. There are the machine learning algorithms that we run against datasets to come back with interesting insights on things. Probably the classic example is you've got a lot of textual information coming in out of an EAM system. [00:28:30] You may have long and short text, and you can take that information, measure it against another data source, and say, ah yeah, that's a rotating piece of equipment, that's a bearing problem, and help identify what it is.
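The text-to-failure-mode mapping Eddie describes would be done with a trained model in production; the toy keyword lookup below just shows the shape of the problem, turning free-text work-order notes into a likely failure mode. The keywords and categories are invented for illustration.

```python
# Hypothetical keyword table mapping failure modes to tell-tale terms
# found in EAM work-order text.
FAILURE_KEYWORDS = {
    "bearing": ["bearing", "vibration", "spalling"],
    "seal": ["seal", "leak", "weep"],
    "electrical": ["breaker", "short", "ground fault"],
}

def classify_work_order(text: str) -> str:
    """Score each failure mode by keyword hits and return the best match."""
    words = text.lower()
    scores = {mode: sum(kw in words for kw in kws)
              for mode, kws in FAILURE_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(classify_work_order("High vibration on rotating equipment, bearing noise"))
print(classify_work_order("Oil weeping at pump seal"))
```

A real system would replace the keyword table with a classifier trained on labeled work-order history, which is exactly the kind of content advantage Eddie credits to GE's service records.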
So we have components like that that run all the time. We have components that go out and look at things like cases and alerts that are happening, to come back and pick up certain patterns. Artificial intelligence is not like the robot; it's how you put intelligence into the software, [00:29:00] so it can come back on those algorithms and actually control or adjust things automatically for you, within reason. I probably wouldn't round-trip something to the control room floor of a nuclear power plant; not a good thing to do, for security purposes. But can we come back in the software and give you recommendations for what the optimal setting should be? Can I come back in the software and give you the ability to say, if you fly this flight path at this time, at this altitude, with this amount of wind resistance, you [00:29:30] can increase your fuel efficiency by X, Y or Z?
So how do you build that in so that it's not the human asking the question, it's the software looking for the answer and alerting the humans to the possible outcomes? And then you can decide how you want to act upon that data. Do you want to intervene, or do you want the software to make adjustments?
Ali Tabibian: Interesting. In that process of tuning the system so it comes up with an interesting input to the human, how much of it is, if you will, the genius [00:30:00] of the mathematics and the algorithms, and of the person responsible for those, and how much of it really needs a tight integration with a person on the front line, a domain specialist, for that insight to be valuable?
Eddie Amos: The answer to that question can vary greatly. In my world, content is king or queen. So you sit back and look at the various data sources that you have; being able to combine them in a logical manner to produce the outcomes is [00:30:30] priceless. If you have that, then you take a domain or subject matter expert, and they can work with you on the parts, in terms of, yeah, that feels like a rotating piece of equipment and that feels like a bearing failure and this is what we know. We can basically extrapolate a lot of that out on any given day, but it's the nuances that the SMEs come back in with. Oh yeah, it operates under these conditions, and when it's hot in South Carolina on an August day, these [00:31:00] characteristics have been noticed over time.
Now, you can come back and say, I can record those over time with the sensors, but in many ways it's going to be those SMEs who've worked with it, who actually understand it, that can provide insights that maybe we didn't get from the mathematics or from the data sources. Don't forget, there are a lot of things that we just can't do yet. So let's go back to a pipeline inspection. You can run a pipeline pig down a strand. You can pull out tons of ILI [00:31:30] data, terabytes of it. We can come back and mathematically calculate what the corrosion is going to look like inside of that pipe. That's inside of the pipe. You're still going to have to inspect the outside too, because you could have a dent, you could have a bolt coming out. You could have other things.
So that's when the two worlds kind of come together. I don't think there's perfect math here. There are SMEs out there with a lot of domain expertise, and if we could capture it better in software, we could probably make a lot of the algorithms even smarter. So to me that's just another source of content.
Ali Tabibian: [00:32:00] Interesting. The system is really a complement to the user. For the most part, at least in the near term, human judgment isn't really being replaced, it's being augmented.
Eddie Amos: Even when we build a lot of the algorithms out, could I come in and take something we call a cognitive analytic? I could take a cognitive analytic, go against a huge dataset, extrapolate a lot of things out, and go back [00:32:30] to my bearing failure. I could probably tell you within an 80 to 90 percent range that yes, it's a bearing failure. Nine times out of 10 I'm going to want a human, an engineer, to come back, review it, make sure the recommendation is right and press go before we do that.
Same way on a control panel. We can tell you that a failure has occurred or an alert's been tripped. Can we reset them? Absolutely. But in some cases you may not want to reset them. You want [00:33:00] the human to be in the loop to look at other factors before you do it. Can the software do it? Absolutely. It's that fine line of what you want the software to do compared to what you want the human to verify before you act upon it.
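That "fine line" between what the software may do on its own and what a human must verify is, in essence, a dispatch policy. A minimal sketch, assuming a whitelist of low-risk actions and a confidence threshold (both names and values are illustrative, not a real GE Digital API):

```python
# Hypothetical human-in-the-loop gate: auto-apply only low-risk actions
# above a confidence threshold; everything else goes to engineer review.

SAFE_TO_AUTOMATE = {"adjust_setpoint", "reset_alert"}  # assumed whitelist
CONFIDENCE_THRESHOLD = 0.90                            # assumed cutoff

def dispatch(action: str, confidence: float, review_queue: list) -> str:
    """Return 'auto' if the software may act; otherwise queue for a human."""
    if action in SAFE_TO_AUTOMATE and confidence >= CONFIDENCE_THRESHOLD:
        return "auto"
    review_queue.append((action, confidence))
    return "review"

queue = []
print(dispatch("adjust_setpoint", 0.95, queue))  # auto
print(dispatch("reset_alert", 0.80, queue))      # review: below threshold
print(dispatch("shutdown_unit", 0.99, queue))    # review: never automated
```

Note that the third case is queued regardless of confidence: some actions, like the nuclear control-room example earlier, stay off the automation whitelist entirely.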
Ali Tabibian: I know the answer is probably different depending on which part of GE we're talking about. Who is the user the system is being designed for? Is it the factory worker who wants to know when to replace that belt, or the factory engineer, the design engineer, management, all the way to the financial analyst? Who's [00:33:30] really being targeted?
Eddie Amos: I'll use one of Cramer's lines from CNBC: from the shop floor to the top floor, that audience varies. So there are going to be folks down on the shop floor who are building it. They'll want to know things like inventory turns. They want to know more about, is my supply chain optimized? Do I have my two pieces of machinery talking to each other on the MES line with a gigabit ethernet switch?
The characteristics they're looking for are probably going to be much different from those of a reliability [00:34:00] or operations maintenance engineer, who's going to be looking at the operational components of it: after it's been manufactured, how it actually operates, how reliable it is, what strategies need to be put in place. Compared to a field service person, who's going to be looking at, how do I optimize my routes, how do I make sure I have the right spares at the right location at the right time. Up to the C-level: how do I aggregate that data? Is this supplier better than that supplier? Can I reduce my inventory turns by such and such? So [00:34:30] it depends on which part of the design chain you're on, who's going to benefit from the equipment.
We have dashboards set up in the software today that go everywhere from a person on the factory floor looking at the raw materials coming through the chain to build something, all the way to an executive saying, okay, if I don't increase this component on the factory floor I'm going to have to shut down the line, which wouldn't be good. I see a lot of interest today in the [00:35:00] reliability components. Downtime costs money. A lot of the C-levels are looking at the data now and saying, what are we doing? Are we spending the right amount of money? It's like insurance. I've got two knobs I can turn. One is going to be financial, one is going to be risk. Can I accept more risk? Am I willing to do this? So regardless of where you're at, on the shop floor or the top floor or anywhere in between, it gives you the ability to analyze and make decisions like you've never been able to do before.
Now, it goes back to, you know, a lot of peers and startups in the industry. I don't really think they understand the domain to the point that they should, because anybody can take a set of data, put it in a CSV file, put it up and run your favorite machine learning algorithms on it. That's not where the real beauty comes in on a digital transformation. It's how you use the data to basically transform your business. That's going to require a lot of sensors, a lot of algorithms, a lot of imports from many data sources, so that you can come [00:36:00] back and make good decisions.
One of the biggest things we probably get hit with every day is, yeah, that's great if you've got a greenfield operation. Everything's brand new, everything's got sensors on it. But guess what, the world's not like that. There are a lot of brownfield operations. How do we go back and work with those folks? Do we have the right vibration monitors on there? Do we have the right electrical current pulses out there? There are many ways to move customers across that continuum and give them insights. It's just where you start and what value you're looking for.
Ali Tabibian: [00:36:30] So it's interesting, what you're describing is a very broad and comprehensive solution that GE Digital has to offer. If I recall correctly, in the beginning the solution was maybe not as broad, but also kind of taller. By necessity, when you're first delivering a solution, you almost have to deliver a reference design that includes everything from the sort of Amazon Web Services type of infrastructure all the way to the top. Where is that architecture today, and [00:37:00] how do you see it developing in the next few years?
Eddie Amos: When Bill Ruh started the group, he had the foresight to step back and say, you know, a lot of the problems I'm seeing in the industry are common problems that are repeatable. And then you start looking across the software assets we had in the company, and you find that GE as a company had some pretty incredible assets that were basically standalone, siloed products. So you may have heard of components [00:37:30] like SmartSignal. SmartSignal is very widely used in the industry today. It gives you very interesting insights. We learn every day and do new blueprints of what we call the daily catches of insights that we can share with customers. Well, that was one solution.
Then you sit back and think about APM. APM was solving a whole different set of problems. It was looking at reliability-based maintenance. It was, how do you bring more life out of it. So that was another silo. Then you look at things like [00:38:00] manufacturing execution systems, MES. We had two of those, one for one set of manufacturing, one for another. Then we had automation systems, SCADAs, historians; all of those components are tied together. So when you sit back and look at it, it was very, very vertically focused. But when you started thinking about what the commonality was, it all revolves around something called an asset. When you step back and look at a company like SAP, they built everything they had off of a financial model [00:38:30] and they extended out.
When Tom Siebel created Siebel and Marc Benioff created Salesforce.com, it was based on a customer model: everything about the sales cycle, that customer, everything that you wanted to know. As we thought about it, it was really about the asset, from design to decommission. Then we started looking at that portfolio that was very vertically siloed and asking how we bring it together. So that's the quest that we've been on. We thought about digital transformation. How do we get your workers so they're actually using this [00:39:00] material to take advantage of the change that's happening in the industry? How do you then integrate these software products together so that we can give you a view from beginning to end?
So when you look at GE today, it's taken the best practices of the past 20 years of software products and brought them into a phase of what we look at as digital transformation. How do we gain really good insights across the entire value chain, and then how do we make a seamless solution? And guess what, different customers will be at different parts of that value chain, so we don't [00:39:30] make them all take the whole solution at one time, but let them start slow and grow fast. What we find is, over time, instead of buying a bunch of point solutions, people are buying the suite because they want the end-to-end view. They want a view that allows them to see what happens from the time it leaves the shop floor to where it's actually being serviced out in the field.
So it's gone from a very vertical, siloed approach to a very horizontal approach. But even with a horizontal approach, we had to build the software in a manner that [00:40:00] allows for maximum extensibility, because everybody wants to do things just a tad different. We wanted to make sure that we didn't put folks in the old ERP syndrome, where you had to build a bunch of one-offs that could never be upgraded. So with the extensibility that we offer, we like to think more about configuration than customization. How do you take the bits, how do you add them, how do you drop algorithms down into the runtime? How can you leverage our asset model? How do you build new applications off the components that we have and solve the world's problems?
[00:40:30] So to answer your question, vertical to horizontal, with great vertical extensibility built off of it. So the business units here will take our product and do things with it that are above and beyond what we intended. That's okay. As a horizontal platform, we'll understand 70 to 80 percent of a particular problem domain, and then, back to your SMEs earlier, that's where they can really come in and do their magic, build something that's going to be unique for aviation or unique for power or renewables.
Ali Tabibian: I'm imagining your ability [00:41:00] to partner with providers of other technical solutions extends to solutions that cut across equipment at the lower parts of the stack, layers that manage security or upgrading of the firmware.
Eddie Amos: So, the way that we view the stack in general is that we want it to be very plug and play. So if folks want to come in and plug in a different set of [00:41:30] algorithms or security protocols, from the operation network to the control network up to the business network, yeah, we should enable that. We should be able to work with them in terms of how they display information on various devices, how they hook in various third-party devices within the APM software. GE owns a historian. There are a lot of historians out in the world. There's OSIsoft, there's Honeywell; [00:42:00] it's like 12 of them we plug into. There are many of them out there. You have to be extensible enough to say come one, come all, to make this thing work.
So, we won't be beat at the beginning or end of any conversation. We'll know probably more about GE equipment than anybody on the planet, as we should. We will know a lot about third-party equipment, because we operate third-party facilities and we have a lot of data and metadata that we've collected. Customers always own their data. But then, being able to plug in other sources, customers [00:42:30] get great value out of it. So I haven't seen one system that someone has asked us to plug into that we couldn't, because of the extensibility of the product. I'll say that and I'll have one on my desk this afternoon, but that's the gist. We want to make it open, we want to make it extensible, we want to work with data types that aren't necessarily GE. We want to own solutions that are not necessarily GE.
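The "come one, come all" extensibility Eddie describes usually means defining a common adapter interface so any vendor's historian can feed the same analytics. A minimal sketch of the pattern, with class and method names that are illustrative assumptions rather than the actual product API:

```python
# Hypothetical adapter pattern for pluggable historians: analytics code
# depends only on a small interface, never on a specific vendor.

from abc import ABC, abstractmethod

class HistorianAdapter(ABC):
    """Minimal contract every plugged-in historian must satisfy."""

    @abstractmethod
    def read_tag(self, tag: str) -> list[float]:
        """Return time-series values recorded for a sensor tag."""

class InMemoryHistorian(HistorianAdapter):
    """Stand-in for a real vendor connector, for demonstration only."""

    def __init__(self, data: dict[str, list[float]]):
        self._data = data

    def read_tag(self, tag: str) -> list[float]:
        return self._data.get(tag, [])

def mean_reading(source: HistorianAdapter, tag: str) -> float:
    """Analytics written once, reusable against any adapter."""
    values = source.read_tag(tag)
    return sum(values) / len(values) if values else 0.0

hist = InMemoryHistorian({"pump12.vibration": [1.0, 2.0, 3.0]})
print(mean_reading(hist, "pump12.vibration"))  # prints 2.0
```

Each real connector (OSIsoft, Honeywell, and so on) would implement `read_tag` against its own protocol; the analytics layer above never changes, which is what makes "12 of them we plug into" tractable.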
In the next several months, I'm going to be publishing what I call the white space map. Here are areas that are interesting in this space that [00:43:00] I have no desire to go out and build software on, but other folks can go out and build a very interesting software practice on. We've got one around spares optimization right now; the product will give you rudimentary capabilities, but this was a wide-open space for one of our partners to build something great on, which they have. I've got about 12 of these areas that I want to put out there.
So, partners are important. We want to give them the ability to be a part of the ecosystem, grow the ecosystem out. Other equipment vendors want to plug in; our asset [00:43:30] model is very open, and we want them to plug into it. Other analytics vendors want to play with what we do; we have a data layer, an abstraction layer across ours, so they can take advantage of that. So we're trying to keep it open, we're trying to keep it horizontal, we're trying to solve big problems.
Ali Tabibian: Outstanding. Eddie, you've been very generous with your time. Let me end by asking, would you like to point to anything, current or future, that you're really excited about, in terms of what change is coming up that your GE customers [00:44:00] are really going to go, wow, you know what, this is the pot of gold at the end of the rainbow, this year's rainbow?
Eddie Amos: So, I'll give you a couple of things that we're working on right now. I can't give you exact dates, because that gets into promising things we shouldn't. What are we working on in the labs that's kind of interesting? Right now we've got a new set of edge technologies that's quite remarkable, that we've been working on for quite a while, and we've [00:44:30] just finished a tour with the analysts, who compared what we've been doing to what's in the industry right now, and they say we're by far ahead of the pack. This is going to give us better ways to ingest data and run analytics on the edge; it's the next generation of things that we've had in the market, but with a lot more functionality.
We've been working on a new set of tooling. Last year at Minds & Machines you may have noticed we talked about something called Predix Studio. We've taken that to the next level. We've brought it in, embedded into the product, which [00:45:00] is quite exciting. I saw the early beta of that, which we'll be putting out in the near future. We've got another set of tooling for people who want to extend our product out. SIs, ISVs and so on can do this in a more easily drag-and-drop manner without having to get knee-deep in code, and it still gives you the ability to code if you want. New plugins that we're doing in terms of sensor-type components. Lots of new content providers that we're working on.
We've talked about Predix Private Cloud. [00:45:30] There's a big reference customer coming up on that in the second half of the year that we're quite excited to announce. So, if you liked Minds & Machines last year, you're going to love it this year, because what I'm telling the dev group and our customers is, we're going to be in Missouri, the Show Me State, this year. The things we're going to share on the floor are going to be real, they're going to be live. It's not going to be a PowerPoint like a lot of our competitors do. We encourage everyone to come to Minds & Machines and touch it and play with it and give us feedback on how we can make it better.
Ali Tabibian: I think this year, it's [00:46:00] in San Francisco but it's moving back to one of the piers, Halloween weekend.
Eddie Amos: So I'm going to dress up like a middle-aged software guy. Wear khakis and a blue shirt.
Ali Tabibian: I'll let you borrow one of my blue blazers. Anything else Eddie? You've been wonderful.
Eddie Amos: If you want to learn more about what we're doing, check out the digital side of GE. Come to Minds & Machines, it's a great event. You'll get to meet the product managers, the developers, the executives. Challenge us on what we're doing. You'll start seeing more and more events that we're [00:46:30] going to be doing in the future, customer listening tours worldwide. So, if you see us in town, please stop by. We'd love to meet with folks, challenge us on what we're doing, and let us show you what we're doing. At the end of the day, we're here to protect the people, the planet and the profits of our customers.
Ali Tabibian: Thank you so much, it's been wonderful.
Eddie Amos: Thank you.
Voiceover: Tech. Cars. Machines. Subscribe here or at gtkpartners.com.