30 November 2019

Machine Programming: What Lies Ahead?


Imagine software that creates its own software. That is what machine programming is all about. Like other fields of artificial intelligence, machine programming has been around since the 1950s, but it is now at an inflection point.

Machine programming has the potential to redefine many industries, including software development, autonomous vehicles and financial services, according to Justin Gottschlich, head of machine programming research at Intel Labs. His newly formed research group at Intel focuses on the promise of machine programming, which is a fusion of machine learning, formal methods, programming languages, compilers and computer systems.

In a conversation with Knowledge@Wharton during a visit to Penn, Gottschlich discusses why he believes the historical way of programming is flawed, what is driving the growth of machine programming, the impact it can have and other related issues. He was a keynote speaker at the PRECISE Industry Day 2019 organized by the PRECISE Center at Penn Engineering. 

Following is an edited transcript of the conversation.


Knowledge@Wharton: Given the buzz around AI, a lot of people are familiar with machine learning. However, most of us don’t have a clue about what “machine programming” means. Could you explain the difference between the two?

Justin Gottschlich: At the highest level, machine learning can be considered a subset of artificial intelligence. There are many different types of machine learning techniques. One of the most prominent at present is deep neural networks, which have contributed a great deal to the tremendous progress we've seen over the last decade. Machine programming is about automating the development and maintenance of software. You can think of machine learning as a subset of machine programming. But in addition to machine learning techniques, which give approximate solutions, in machine programming we also use things like formal program synthesis techniques, which provide mathematical guarantees to ensure precise software behavior. You can think of these two as a spectrum: approximate solutions at one end, precise solutions at the other, and in between many different ways to combine them. Every one of these approaches is part of the bigger landscape of machine programming.

Knowledge@Wharton: So machine programming is when you create software that can create more software?

Gottschlich: Right.

Knowledge@Wharton: How would that happen? Could you give an example?

Gottschlich: The core idea of machine programming is creating software that creates its own software. We recently built a system using genetic algorithms that takes certain input/output examples and, by running them through a number of iterations — we call them "evolutions" in the genetic algorithm space — automatically synthesizes a program that matches the input and output. You do this in the training phase. The system can then take new input/output examples it has never seen before and generate new types of programs.
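To make the idea concrete, here is a minimal, purely hypothetical sketch of genetic-algorithm program synthesis in Python. It is not the system described above; it simply evolves tiny arithmetic expression trees until one reproduces a handful of input/output examples.

```python
# A toy genetic-algorithm program synthesizer (hypothetical illustration only,
# not the system described in the interview). Candidate "programs" are small
# arithmetic expression trees over one input x; the population is evolved
# until a candidate reproduces the given input/output examples.
import random

OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
}

def random_program(depth=2):
    # A program is either a leaf ("x" or a small constant) or an operator node.
    if depth == 0 or random.random() < 0.3:
        return "x" if random.random() < 0.5 else random.randint(-5, 5)
    op = random.choice(list(OPS))
    return (op, random_program(depth - 1), random_program(depth - 1))

def run(prog, x):
    # Interpret an expression tree for a concrete input value x.
    if prog == "x":
        return x
    if isinstance(prog, int):
        return prog
    op, left, right = prog
    return OPS[op](run(left, x), run(right, x))

def fitness(prog, examples):
    # Lower is better: total absolute error over the input/output examples.
    return sum(abs(run(prog, x) - y) for x, y in examples)

def mutate(prog):
    # Replace a random subtree with a freshly generated one.
    if isinstance(prog, tuple) and random.random() < 0.7:
        op, left, right = prog
        if random.random() < 0.5:
            return (op, mutate(left), right)
        return (op, left, mutate(right))
    return random_program()

def synthesize(examples, population=200, evolutions=100):
    # Evolve the population; stop early if a program matches every example.
    pool = [random_program() for _ in range(population)]
    for _ in range(evolutions):
        pool.sort(key=lambda p: fitness(p, examples))
        if fitness(pool[0], examples) == 0:
            return pool[0]
        survivors = pool[: population // 4]
        pool = survivors + [mutate(random.choice(survivors))
                            for _ in range(population - len(survivors))]
    return min(pool, key=lambda p: fitness(p, examples))

# Input/output examples consistent with f(x) = 2x + 3.
examples = [(0, 3), (1, 5), (2, 7), (5, 13)]
# Prints the best program found, e.g. ('add', ('add', 'x', 'x'), 3)
# if an exact match is discovered within the search budget.
print(synthesize(examples))
```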

“The core idea of machine programming is creating software that creates its own software.”

Knowledge@Wharton: Which industries do you think are likely to be affected most by machine programming and over what period of time?

Gottschlich: At the highest level, one could imagine that any of the industries that are predominantly based in software are going to benefit tremendously from this. A survey done earlier this year (in 2019) showed that we have around half a million open computer scientist positions. These are programming positions that we need to fill, but we're producing only enough academically trained programmers to fill roughly 10% of those roles. What we have in the software industry is essentially a supply bottleneck. If we can automate some of the simple tasks — reading a file, parsing data, software development testing — it will tremendously accelerate the rate at which software is developed. So that's probably the first area that is likely to be impacted by machine programming.

The other area that I think is going to be impacted tremendously is autonomous systems. A core ingredient of these systems is software. For example, a big bottleneck holding us back from getting to level 4 or level 5 autonomy — the point where the car can essentially handle all of the nuanced behaviors of driving — is the implementation and the algorithms of the machine learning systems. If we can automatically construct those, these autonomous systems will also accelerate in their advancement.

Knowledge@Wharton: Speaking of the auto industry, what kind of impact do you think machine programming will have on the drive towards autonomy?

Gottschlich: As I mentioned earlier, we recently built a system that uses a genetic algorithm to automatically construct programs. One of the pieces of a genetic algorithm is a "fitness function." You can think of it as a way to grade the accuracy of the programs or results that the genetic algorithm produces. So the genetic algorithm produces a result, and the fitness function says, "You get a B." Or, "You get an A." Historically, fitness functions have been written by humans — people who are experts at machine learning. But we often find that the complexity of the problem you're trying to solve is directly related to the complexity of the fitness function. So why would you write the fitness function? Why not just solve the problem yourself? We looked at this and figured out a way, using machine learning, to automatically create the fitness function without human involvement.
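As a hypothetical illustration of the grading analogy, a hand-written fitness function for program synthesis might look like the sketch below. In the system described above, such a grader would itself be generated by machine learning rather than written by a human expert.

```python
# Hypothetical illustration of a hand-written fitness function that "grades"
# a candidate program, echoing the "You get a B" analogy. In the system
# described above, a grader like this would itself be produced by machine
# learning rather than written by a human expert.

def grade(candidate_outputs, expected_outputs):
    # Compare a candidate program's outputs to the expected outputs and
    # map the resulting accuracy onto a letter grade.
    correct = sum(1 for got, want in zip(candidate_outputs, expected_outputs)
                  if got == want)
    accuracy = correct / len(expected_outputs)
    for cutoff, letter in [(0.9, "A"), (0.8, "B"), (0.7, "C"), (0.6, "D")]:
        if accuracy >= cutoff:
            return letter
    return "F"

print(grade([3, 5, 7, 14], [3, 5, 7, 13]))  # 3 of 4 outputs correct -> "C"
```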

Going back to your question, one of the things that is holding us back in the autonomous vehicle space is the advancement of machine learning systems. Historically, the advancements that we’ve had with machine learning systems have been through humans creating them. But if we use machine programming, like we have with the genetic algorithm solution, the machine can invent its own machine learning systems that will then accelerate the progress of these autonomous systems.

Knowledge@Wharton: What are the implications of this? One of the things holding back autonomous systems or autonomous vehicles is that the system might not be able to make a certain decision fast enough and therefore might end up hitting someone or something. So we probably need software that can predict what’s about to happen before it actually happens. Is that one of the issues?

“At the highest level, one could imagine that any of the industries that are predominantly based in software are going to benefit tremendously from this.”

Gottschlich: Absolutely. We had a NeurIPS paper in 2018 — NeurIPS is one of the leading research conferences in machine learning — that tried to address this problem. What you're describing here is the space of anomaly detection. In the autonomous vehicle space, when we think about these various behaviors, we think, "This is an anomaly." In particular, it's a time series anomaly. For example, you're trying to prevent a vehicle from colliding with another vehicle or hitting a pedestrian. It's too late to detect the event if it has already happened. To address this problem, we recreated the mathematical foundation for anomaly detection, specifically for time series. We hope that the community will adopt the new mathematical foundation we've created and apply it to time series anomaly detectors.
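For readers unfamiliar with the term, the sketch below is a standard rolling z-score detector, a common baseline that makes the time series anomaly detection problem concrete. It is not the new mathematical foundation described in the NeurIPS paper.

```python
# A standard rolling z-score detector, shown only to make the notion of
# time-series anomaly detection concrete. This is a textbook baseline,
# not the new mathematical foundation described in the NeurIPS paper.
from collections import deque
import math

def detect_anomalies(stream, window=20, threshold=3.0):
    # Flag readings that deviate from the recent window's mean by more than
    # `threshold` standard deviations.
    recent = deque(maxlen=window)
    flags = []
    for t, value in enumerate(stream):
        if len(recent) == window:
            mean = sum(recent) / window
            std = math.sqrt(sum((v - mean) ** 2 for v in recent) / window) or 1e-9
            if abs(value - mean) / std > threshold:
                flags.append((t, value))
        recent.append(value)
    return flags

# A mostly steady sensor signal with one sudden spike at index 30.
signal = [1.0] * 30 + [9.0] + [1.0] * 10
print(detect_anomalies(signal))  # -> [(30, 9.0)]
```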

Knowledge@Wharton: And machine programming helps with all this?

Gottschlich: Absolutely. In the context of autonomous vehicles you could use this mathematical foundation to better predict these anomalies. But when you think about machine programming, or programming in general, many of the problems we're seeing with software are around correctness bugs, security bugs and privacy violations. All of these are in some sense time series in nature. A program is a sequence of instructions, one after another. So if you take that mathematical foundation, you can also apply it in the space of machine programming, which is exactly what we're doing.

Knowledge@Wharton: Machine programming, like a lot of other areas of AI, has been around since the 1950s. Why is there this sudden interest in machine programming now? Why is it picking up in such a big way? And why is Intel interested in investing in it?

Gottschlich: If we look at why it's taking off today, I would say there are principally two reasons. The first is that we're at an inflection point. The second is that my colleagues and I at Intel Labs and at MIT have made an important observation about how to think about the future of machine programming.

As far as the inflection point goes, my view is that there are three things that have created this. The first is that we have made tremendous advances in algorithms in machine learning and in formal methods. Things that didn’t exist, say, 12 months ago, are now fundamental to the advancement of machine programming.

The second is that we have tremendous advances in computing today. As the recent Turing Award winners Dave Patterson and John Hennessy point out, we're living in what they call "The Golden Age of Computing," driven by what they call "domain specific architectures." For a long time it was just the CPU (central processing unit), but now, based on the advances in machine learning and other areas, we have accelerators that are specific to these domains. That is creating a tremendous opportunity to accelerate machine learning and formal methods in a way that wasn't possible earlier.

The third piece is the abundance of big and dense data. For example, there is a repository called GitHub where people store their software. Back in 2008, I think it had roughly 33,000 repositories. In 2019, when I looked at it earlier this summer, I think this number was over 200 million. This is tremendous growth. It’s nearly a four-order-of-magnitude growth in a decade. Data drives a lot of these machine learning systems. So this has essentially created a vehicle in which we can start to explore this space.

Coming to the observation that my colleagues and I at Intel Labs and MIT have made — fundamentally we feel that the way we've historically done programming is flawed. There's essentially a blurring of the programmer's intention with the algorithms and the system-level details. As we move forward, we want the programmer to just specify his or her intention. So if you want a program that will tell you where the nearest Starbucks is, you just say, "Computer, create a program that will always notify me when I'm near a Starbucks." The computer then handles the details of the algorithms to implement. It understands how to translate that to work on the hardware that's on your cell phone or in a data center. These are the two pieces that we think are creating the opportunity for tremendous growth in machine programming.

“Fundamentally we feel that the way we’ve historically done programming is flawed.”

Intel, obviously, is very interested in advances in hardware. I have been at Intel for about a decade now, and one exciting change I've seen is that Intel used to be just a CPU company, but now the heterogeneous hardware landscape we have at Intel is enormous. We have neural network processors. We have neuromorphic processors. We have GPUs (graphics processing units). We have a variety of accelerators. We have FPGAs (field programmable gate arrays), and we have a ton of CPUs. The problem, though, is programming these things. We can have all this tremendous hardware, but how can we possibly expect the average developer to program it? This is why machine programming is essential to Intel. Intel understands that with this new heterogeneous hardware landscape, which is required to advance all of the technology we're seeing, we need a way that is simple enough for the average programmer to harness this massive amount of heterogeneous computing.

Knowledge@Wharton: You have written a paper called The Three Pillars of Machine Programming. Could you share some insights from that?

Gottschlich: In 2017, a few of us from Intel Labs teamed up with several people at MIT and came up with this vision: What if we did "machine programming"? What would the landscape look like? The main reason for this was that while people were starting to explore machine programming, the efforts were disorganized. There wasn't any structure to the thinking.

The Three Pillars of Machine Programming is essentially a roadmap on how we want to express and explore the research space. The three pillars are intention, invention and adaptation.

The intention pillar is what the programmer would be doing. I don’t call these people “programmers.” I call them “software creators.” Our blue-sky vision is that these folks won’t write a single line of code. They will express their intention through natural language, gestures and visual diagrams — whatever is best for them. The invention pillar will take the programmer’s or the software creator’s intention and translate that into software. These are the algorithms, the data structures and so on. Once that is established, the work will get handed off to the adaptation pillar. The adaptation pillar will take that code and figure out, “Okay, what does the software and hardware ecosystem look like for this particular program? How do we need to augment it to make it run efficiently, securely, correctly — and in the machine learning context — accurately?”

Knowledge@Wharton: In addition to Intel, some other companies are also working on machine programming. Are there any firms with whom you collaborate whose work you could talk about?

Gottschlich: We have many collaborators in industry as well as academia. Our industry partners include Microsoft and Facebook. At Microsoft, Sumit Gulwani, who is considered by many to be one of the founders of formal program synthesis, has developed a system inside Excel that automatically figures out what the user's intent is. They call this FlashFill. This is concrete, real-world evidence that machine programming is not just a research toy; you can build it into real products. Microsoft is deeply interested in this.
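To give a flavor of programming by example, here is a toy, hypothetical sketch in the spirit of FlashFill. The real FlashFill uses a much richer domain-specific language and a ranking algorithm; this version merely searches short compositions of a few string operations for one consistent with the user's demonstrations.

```python
# A toy programming-by-example engine in the spirit of FlashFill (hypothetical
# sketch; the real FlashFill uses a far richer domain-specific language and a
# ranking algorithm). It searches short compositions of a few string operations
# for one that is consistent with the user's demonstrations.
from itertools import product

OPS = {
    "identity":   lambda s: s,
    "upper":      lambda s: s.upper(),
    "lower":      lambda s: s.lower(),
    "first_word": lambda s: s.split()[0] if s.split() else s,
    "last_word":  lambda s: s.split()[-1] if s.split() else s,
    "initial":    lambda s: s[:1],
}

def apply_ops(s, combo):
    # Apply a sequence of named operations to the string s, in order.
    for name in combo:
        s = OPS[name](s)
    return s

def synthesize(examples, max_ops=2):
    # Return the first short composition of operations consistent with all examples.
    names = list(OPS)
    for length in range(1, max_ops + 1):
        for combo in product(names, repeat=length):
            if all(apply_ops(inp, combo) == out for inp, out in examples):
                return combo
    return None

# Two demonstrations: "take the first word and uppercase it."
examples = [("ada lovelace", "ADA"), ("grace hopper", "GRACE")]
program = synthesize(examples)
print(program)                             # -> ('upper', 'first_word')
print(apply_ops("alan turing", program))   # -> "ALAN"
```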

“…While we think of the space of machine programming as being a very long journey, there are things that we can do today in industry that could be extremely valuable.”

Facebook is doing tremendous work in this space. They recently published a paper about a system called Aroma. It works along the same lines as the three pillars and is principally focused on helping with intention. Say a programmer has an intention to write some code but doesn't know exactly how to write it. The Aroma system will take a little bit of that code and do an analysis over a very large database. It will then ask the user, "Is this what you meant?" It's sort of a human-in-the-loop, approximate machine learning solution. That's good early evidence that while we think of the space of machine programming as being a very long journey, there are things we can do today in industry that could be extremely valuable.

Knowledge@Wharton: Which countries do you think are making impressive progress in the area of machine programming? In AI, China is advancing in leaps and bounds. Could you talk about what's happening in other parts of the world, and some of the things you are paying attention to?

Gottschlich: China is doing tremendous things. It has very strong governmental and infrastructural support for AI. It's my belief that the U.S. also has this, but maybe not to the level that China does. As a country we probably need to be more aggressive and progressive about this. There are also a lot of advances happening in Europe. They have very strong machine learning leaders in academia and also a vision supported by their governmental infrastructure.

Knowledge@Wharton: Which European countries do you think are doing the most interesting work?

Gottschlich: Germany is doing some tremendous stuff. Part of that has to do with the fact that they've been deeply involved in vehicles. The natural evolution is autonomous vehicles, and the by-product of that is deep engagement in AI and machine learning. Another strong presence in Europe is Switzerland, specifically what's coming out of ETH Zurich. Not only are they producing outstanding AI and ML results, they are also exploring important ideas in the machine programming space.

Knowledge@Wharton: Which innovations in machine programming do you think are most promising? And where do you think the next breakthroughs will occur in the immediate future?

Gottschlich: There is a lot of low-hanging fruit where we can make advances and build things like Aroma or FlashFill that are very useful. But there are some core challenges — at least among the folks I'm interacting with at places like Stanford, MIT, Google and within Intel Labs — that we don't quite have the answer to. The first is the structural representation of intention. Oftentimes when we're writing code, the programmer's intention is diffused across the code. We want to understand how to properly represent the user's intention, and we don't quite have that yet. There are a lot of advances we've made historically with things like compilers and static analysis tools that create different sorts of graph or tree structures. But when we've tried to apply these in the space of machine programming, they don't quite fit. We can push the square peg into the round hole, but it's not the right match.

“One of the promises of machine programming — and we’re seeing early evidence of this — is that the code we can generate through these automated methods will be superhuman in performance, correctness and security.”

At Intel, we’re thinking about what we call “the abstract semantic graph.” The idea is that this structure — whatever this is that we don’t quite understand — will be some sort of graphical representation of the semantics — essentially the intention — of what the user wants. I believe that once we figure out how to build this abstract semantic graph, the field of machine programming will see a tremendous spike in growth. A lot of people are working on this. I’m working with collaborators both in industry and academia — folks at Penn, Berkeley, MIT. We are all thinking deeply about this. Hopefully we’ll be able to figure out this abstract semantic graph soon. Until we do that, we’ll work with the not-so-perfect solutions and try to edge our way forward.

Knowledge@Wharton: If you figure it out, what might some of the implications be?

Gottschlich: The programs we'll be able to generate are likely to be orders of magnitude more complex than the ones we can create today. For example, in the space of formal program synthesis or approximate solutions for machine programming, we may be restricted to programs of up to a hundred instructions. If we figure out how to build this abstract semantic graph, it's my belief that we will move from hundreds to thousands — potentially millions — of lines of code. The implications are enormous.

Knowledge@Wharton: Whenever any new technology comes along, especially AI or machine learning, or as you’ve described, machine programming, very often technologists have to justify these investments to the CFO or to the CEO in terms of ROI or how it fits with the business strategy. What are some of the metrics that you think about in terms of measuring the ROI of machine programming?

Gottschlich: As leader of the Machine Programming Research Group at Intel, it's my job not only to work on the research, but also to justify its business value. Intel is very interested in performance. But we're not just interested in hardware performance. We're also interested in software performance. A programmer writing code that's slow might blame the Intel processor for being slow, even though the problem is not the processor; it's actually the software. One of the promises of machine programming — and we're seeing early evidence of this — is that the code we can generate through these automated methods will be superhuman in performance, correctness, security and so on.

One concrete example of this is a system called Halide that some of my colleagues have built. They include Andrew Adams, Jonathan Ragan-Kelley and Kayvon Fatahalian — folks from MIT, Stanford and Facebook. (Adams moved from Facebook to Adobe Research in June 2019.) Halide is a programming language that separates out the programmer's intention from the scheduling of that intention. In their paper published in June this year, they showed for the first time that the world's foremost experts in this programming language can't compete with the machine. The machine regularly produces more efficient code — and I'm just going to guess here — by at least 50%; it might be upwards of 100% faster. This is the first time in the decade they've been working on Halide that they've been able to achieve this. It gives us hope that if we can do it in Halide, maybe we can generalize it and start to improve the efficiency of code everywhere. This is important to Intel because we want everyone's software to run as efficiently as possible. We don't want people to mistakenly believe that our hardware is slow when the problem is somewhere else.
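The separation Halide makes can be sketched conceptually in plain Python (this is not Halide's actual API): the algorithm, a simple 1-D blur, is written once, and two different "schedules" decide when and where its values are computed. In the work mentioned above, the schedule is chosen automatically rather than by a human expert.

```python
# Conceptual illustration in plain Python (not Halide's actual API): the same
# algorithm, a 1-D box blur, written once and executed under two different
# "schedules" that decide when and where its values are computed.

def blur(x, i):
    # The algorithm: output i is the average of three neighbouring inputs.
    return (x[i - 1] + x[i] + x[i + 1]) / 3.0

def schedule_materialize(x):
    # Schedule A: compute and store every output eagerly in one pass.
    return [blur(x, i) for i in range(1, len(x) - 1)]

def schedule_on_demand(x):
    # Schedule B: compute each output lazily, only when it is consumed.
    for i in range(1, len(x) - 1):
        yield blur(x, i)

data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
print(schedule_materialize(data))        # [2.0, 3.0, 4.0, 5.0]
print(list(schedule_on_demand(data)))    # same values, different execution strategy
```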

Knowledge@Wharton: As AI systems get implemented, the impact on jobs could be considerable. For example, there is a fear that truck drivers could lose their jobs if autonomous vehicles start hauling goods across highways. Do you think there is a risk that if machine programming takes off, the same thing could happen to software programmer jobs? Is this something that should concern the computer industry?

“Through machine programming, we will create many jobs. Perhaps millions or hundreds of millions of jobs.”

Gottschlich: This is a question I'm asked quite often. My honest opinion is that the inverse will happen. Through machine programming, we will create many jobs, perhaps millions or hundreds of millions of jobs. The reasoning is simple. We have a global population in the billions, yet the programmer pool is a very small fraction of that; I think it's roughly around 1% of the global population. With machine programming, what we are trying to achieve is to enable the entire global population to create software.

For example, my mother is an incredible entrepreneur. She has created several businesses and has done fantastically well, but she's not a programmer. The entire world of software is closed to her. I see someone like her who's wildly creative and has some amazing ideas, but because software is closed off to her, those ideas don't get realized. Hopefully machine programming, with this intentionality we were discussing earlier, will create tens to hundreds of millions of jobs. It will also keep the programmers we have today employed, because there is work to be done building these very complex systems. As we expand intentionality, we're going to need those people — we call them "Ninjas" at Intel — to ensure that all the subsystems that are part of those three pillars are advancing appropriately.

Knowledge@Wharton: Andy Grove, the former CEO of Intel, once said that for every metric, there should be another paired metric that addresses the adverse consequences of the first. As you think about some of the metrics that you would use to measure the success or the ROI of machine programming, what might be some of the adverse consequences of machine programming? What metrics would you use to ensure that things don’t veer out of control?

Gottschlich: I’m a huge fan of Andy Grove. It’s wonderful to be at a company with such a strong legacy of leadership. Going back to your question about the adverse consequences — we talk about this in our three pillars paper. It is part of the reason why we wrote this paper.

For example, one of my colleagues, Alvin Cheung, who is a professor at Berkeley, is doing work called "verified lifting." It essentially uses formal program synthesis techniques to lift code out of one programming language and drop it down into another programming language. This is highly useful for legacy systems that can't be maintained because we no longer have a supply of programmers for them. We can lift that code out and put it in a new language where we have lots of programmers. However, one of the things we noticed is that a potential byproduct of that lifting is reduced intentionality. His work, we would say, principally falls in the "invention" pillar and then in the "adaptation" pillar. Based on how the code is transformed, the intentionality of that code could be reduced. For example, things like variable names and function names — things that are important to programmers — may not map properly to the new structure. So as we make progress in machine programming, we've asked the community to think in the context of the three pillars and try to understand whether you are inadvertently hurting another pillar. And if you are, clarify that, so we understand this is another thing we now need to advance.
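A tiny hypothetical example of that concern: both functions below compute the same result, but the mechanically "lifted" version has lost the descriptive names that carried the original programmer's intent. It is an illustration only, not output from any real verified-lifting tool.

```python
# Hypothetical illustration of the intentionality concern (not output from any
# real verified-lifting tool): both functions compute the same result, but the
# mechanically "lifted" version has lost the descriptive names that carried
# the original programmer's intent.

def monthly_payroll(employee_salaries):
    # Original code: names document what the computation means.
    total_payroll = 0.0
    for annual_salary in employee_salaries:
        total_payroll += annual_salary / 12.0
    return total_payroll

def f0(v0):
    # Semantically equivalent lifted code: correct, but the intent is gone.
    return sum(x / 12.0 for x in v0)

salaries = [60000.0, 72000.0, 48000.0]
assert monthly_payroll(salaries) == f0(salaries)
print(f0(salaries))  # 15000.0
```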

Knowledge@Wharton: Could you talk about what kind of work you are planning to do here at Penn and at PRECISE (Penn Research in Embedded Computing and Integrated Systems Engineering)?

Gottschlich: A lot of thought leaders in the space of computer science, in formal methods, in machine learning are part of the PRECISE Center. Recently, I accepted an invitation to help chair the Technical Industry Group for PRECISE and act as their Executive Director for Artificial Intelligence. My role with PRECISE and with Penn is two-fold. The first is with PRECISE. They have a very strong technical consortium of industry collaborators. I would like to ensure that all the industrial partners are working in a complementary manner, that we understand what the core challenges are, and that we’re not working in a way that’s overlapping and duplicating effort. That’s one part.

The other part that's important to me is that right now we have a lack of machine programming engineers and researchers. Even though the field has been around since the 1950s, it has struggled to get to the point where it is today. We're working with Penn and other academic institutions to start incorporating curriculum changes and getting undergrads and grad students more familiar with the field, and then to generate the new leading minds, through Ph.D. programs, who will drive the research happening both in academia and in the industrial labs.
