Artificial Intelligence guru Yann LeCun: "Brainport should merge AI with its Hardware and Health Strength"
AI guru and Head of Facebook AI Yann LeCun says Brainport’s AI opportunity lies in smartly merging AI with its strengths in ICs, medical technology and IoT end nodes.

Facebook increasingly influences how we view the world. We see the opinions of our friends responding to news that may or may not be fake. But which messages we see first and which messages remain hidden is all controlled by algorithms. Algorithms based on Artificial Intelligence. And these algorithms are the work of a large research team led by Professor Yann LeCun.

LeCun is thereby one of the leading technologists behind one of the largest companies in the world. Facebook’s Mark Zuckerberg recognized that LeCun’s ideas could really boost Facebook at a time when LeCun was building a formidable reputation in the academic world with his convolutional neural networks, when almost all handwritten checks were already being processed by his algorithms, but when the link between social media and AI was still unexplored. Meanwhile, AI-controlled social media influence our lives every day.

Philips, Signify and the TU/e awarded Yann LeCun the prestigious Holst Memorial Medal 2018. Radio4Brainport’s Jean-Paul Linnartz spoke to him on his visit to Eindhoven.


On the importance of providing an environment for creative research, such as that which Dr Gilles Holst created at NatLab

I started my career at Bell Labs, which was very much modelled on this idea that you do research in an open way, scientist-driven, thus bottom-up, with a lot of freedom to work on whatever topic seems relevant or interesting. And this is one of the things that I have tried to reproduce to some extent at Facebook AI Research (FAIR), to maximise the creativity and the way to go forward. Not just to advance technology, but to advance science, which I think is necessary for the domain of AI.

Facebook wouldn’t work without Artificial Intelligence

It is actually almost exactly five years ago, on Dec 9, 2013, that Facebook announced that I was joining. What happened was that, over the preceding months, Mark Zuckerberg and the leadership at FB had identified that AI was going to be the key technology for the next decade, and so they decided to invest in that. And that turned out to be true. FB is entirely constructed around deep learning nowadays. If you take deep learning out of FB, it doesn’t work anymore.

AI cannot simply replace all the technology that Philips, ASML, NXP or Signify have developed, but it can contribute to it.

AI has significant implications for healthcare – and will save lives

Probably one of the most exciting applications and developments these days is computer vision. The application of convolutional neural nets in particular, which I had something to do with [read: where he is the prime scientific pioneer, Ed.], to medical imaging is one of the hottest topics in radiology these days. I find that incredibly exciting. I am not working on this myself; there is a project at FB in collaboration with NYU, where I am a professor, and there are various projects by my colleagues. I find that really exciting.

One example of an idea is that by using deep learning-based reconstruction, we could accelerate the collection of data from an MRI machine, which means the test would be cheaper, simpler and faster, which means people can have more of it, essentially. And the analysis can be done automatically, so you can have a fast turnaround for diagnosis. Medical imaging, I think, is one of the biggest applications, and it is going to save lives.
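
To make the idea concrete, below is a minimal sketch of what deep learning-based MRI reconstruction can look like: the scan is accelerated by sampling only a fraction of k-space, and a small convolutional network refines the artifact-laden zero-filled reconstruction. The mask, network and shapes are illustrative assumptions, not the actual FB/NYU system.

```python
# Minimal sketch of accelerated MRI reconstruction (illustrative assumptions only).
import torch
import torch.nn as nn

class RefineNet(nn.Module):
    """Tiny CNN that cleans up a zero-filled reconstruction."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return x + self.net(x)  # residual correction of the input image

def zero_filled_recon(kspace, mask):
    """Inverse FFT of undersampled k-space, with missing lines set to zero."""
    return torch.fft.ifft2(kspace * mask).abs()

# Toy example: one 128x128 image, keeping every 4th k-space column (4x speed-up).
full_kspace = torch.fft.fft2(torch.rand(1, 1, 128, 128))
mask = torch.zeros(1, 1, 128, 128)
mask[..., ::4] = 1.0

blurry = zero_filled_recon(full_kspace, mask)  # fast to acquire, full of artifacts
refined = RefineNet()(blurry)                  # would be trained on (blurry, full) pairs
print(refined.shape)                           # torch.Size([1, 1, 128, 128])
```

The point of the sketch is the workflow: acquire less data, reconstruct quickly, and let a trained network restore image quality so the diagnosis pipeline stays fast.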

On the view that machines learn, but that humans don’t learn from computers

It is not entirely true that we don’t learn from machines. For example, people have gotten better at playing chess and Go because they have played against machines, and with machines. If the machine is better than you at a particular task, you get better at it because you use it to educate yourself. Generally, what is most powerful is the combination of a machine and a person – an expert in the field.

So, machines are there to complement and to empower us, but not to replace us. I am not one of those people who believes that radiologists are going to be replaced by an AI system. It is not the case. There are going to be just as many radiologists, except that their jobs are going to change. Instead of having to spend eight hours a day in a dark room looking at slices of MRIs, they might be able to actually talk to patients, and spend more time on complicated cases.

Preparing for a career in AI is not studying the math of neural nets in isolation!

In AI, in fact, you have to study more math than you would otherwise have to if you do regular computer science. Regular computer science, at least in North America, but it is partly true also for Europe, does not have a huge requirement for mathematics, and most of it is discrete mathematics.

But if you work on machine learning and AI and neural nets and deep learning and computer vision and robotics, that actually requires a lot more continuous math. The kind of math that we used to study forty years ago in the engineering programme. Interestingly, many of the methods that are useful for analysing what happens in a deep-learning system come from statistical physics, for example. What I tell young students who want to get into AI is: if you are ambitious, take as many math courses as you can. Take multivariate calculus, and partial differential equations, and things like that. And study physics of course, quantum mechanics, statistical physics.
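
As a small illustration of the kind of continuous math involved, the sketch below differentiates a made-up two-layer function by hand with the multivariate chain rule and checks the result against automatic differentiation; the function and the numbers are assumptions chosen purely for the example.

```python
# Hand-derived gradient of a tiny network f(x) = w2 * tanh(w1 * x), checked with autograd.
import torch

x = torch.tensor(0.5)
w1 = torch.tensor(1.3, requires_grad=True)
w2 = torch.tensor(-0.7, requires_grad=True)

h = torch.tanh(w1 * x)  # hidden activation
y = w2 * h              # network output

# Chain rule, written out by hand:
#   dy/dw2 = h
#   dy/dw1 = w2 * (1 - tanh(w1*x)^2) * x
dy_dw2_manual = h
dy_dw1_manual = w2 * (1 - torch.tanh(w1 * x) ** 2) * x

y.backward()  # autograd computes the same derivatives
print(float(w1.grad), float(dy_dw1_manual))  # the two values agree
print(float(w2.grad), float(dy_dw2_manual))
```

Training a real deep network is this same calculation repeated across millions of parameters, which is why multivariate calculus is hard to avoid.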

AI combined with domain knowledge, physical devices and hardware is a great opportunity for Brainport

There are lots of opportunities in new kinds of hardware. Of course, NXP is right in that business. I think over the next 5-10 years we are going to see neural-net accelerator chips popping up in just about everything we buy. Everything that has electronics in it will have a neural-net accelerator chip. Within a couple of years, it will be the case for mobile phones, cameras, vacuum cleaners, every toy. Every widget with electronics in it, if you want, will have a neural net chip in it. So, there are a lot of opportunities for that kind of industry.

Signify can place AI at the edge rather than in the cloud. We are going to see a move from the cloud to the periphery: to mobile devices and eventually to Internet of Things devices.
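
One way such a move from cloud to edge plays out in practice is by shrinking models until they fit constrained devices. The sketch below applies PyTorch dynamic quantization to a made-up toy model; the architecture, class labels and workflow are assumptions chosen only to illustrate the idea.

```python
# Illustrative sketch: shrink a (made-up) toy model so inference can run on-device
# instead of in the cloud, using 8-bit dynamic quantization.
import torch
import torch.nn as nn

model = nn.Sequential(          # toy sensor-to-decision model (an assumption)
    nn.Linear(16, 64), nn.ReLU(),
    nn.Linear(64, 4),           # e.g. four hypothetical lighting/occupancy classes
)
model.eval()

# Store the Linear weights as 8-bit integers; activations are quantized on the fly.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

sensor_reading = torch.rand(1, 16)          # pretend IoT sensor features
with torch.no_grad():
    decision = quantized(sensor_reading).argmax(dim=1)
print(decision)
```

The same logic that would otherwise run in a data centre now runs next to the sensor, with lower latency and without shipping raw data to the cloud.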

China is the place to be in AI?

China is interesting because it is investing massively in that. The interesting thing about China is that the public itself is very interested in AI. China is one of the two countries where I am recognized on the streets. Not in the US. Only in China and France. In France because I am French, but in China because there is so much interest in AI that it is everywhere, absolutely everywhere. The Chinese have an advantage, in that they have a very large home market. And a disadvantage, in that they have a completely isolated ecosystem in terms of online services. That is going to make it difficult for them to export their services.

Facial recognition technology: Benign uses and nefarious uses

YL: Well, it is! In fact, that is one of the things that made FB interested in deep learning in the first place. In the spring of 2013 a small group of engineers at FB started experimenting with convolutional nets for image recognition and for face recognition, and they were getting really, really good results. Within a few months they beat all the records and published a really nice paper, called DeepFace, at CVPR, the big computer vision conference, in 2014. That was deployed very quickly for a use case like: “You post a picture and your friends are in the picture and they get tagged automatically, and they can choose to tag themselves or not.” At first it was not turned on in Europe, but it is now turned on in Europe on a voluntary basis. Unfortunately, a very similar technology, using convolutional nets, which is kind of my invention, has been deployed very widely in China on a grand scale, and it is used to spy on people, essentially. So, there are nefarious uses of technology that, thankfully, in many countries the democratic institutions protect us against, but it is not the case everywhere.
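
Mechanically, tagging of this kind usually comes down to mapping each face to an embedding vector with a convolutional net and comparing distances. The sketch below shows only that matching step, with a random placeholder standing in for a trained model such as DeepFace; the threshold and shapes are assumptions.

```python
# Sketch of embedding-based face matching, the step behind automatic tag suggestions.
# The embedder is a random placeholder; a real system would use a trained
# face-recognition convnet (e.g. DeepFace), which is not reproduced here.
import torch
import torch.nn as nn
import torch.nn.functional as F

embedder = nn.Sequential(       # placeholder for a trained face-embedding convnet
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 128),
)

def embed(face_crop):
    """Map an aligned face crop (3x112x112) to a unit-length embedding vector."""
    with torch.no_grad():
        return F.normalize(embedder(face_crop.unsqueeze(0)), dim=1)

known = {"alice": embed(torch.rand(3, 112, 112))}   # embeddings of enrolled friends
query = embed(torch.rand(3, 112, 112))              # face detected in a newly posted photo

# Cosine similarity above a (made-up) threshold triggers a tag suggestion.
for name, reference in known.items():
    if F.cosine_similarity(query, reference).item() > 0.6:
        print("suggest tagging", name)
```

The same matching pipeline, fed by surveillance cameras instead of shared photos, is what the nefarious uses described above amount to.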

There is a very big difference between China, Europe and the US. The US and Europe are getting closer together. Facebook is now applying GDPR-like rules in the US as well. Those are good rules.

Europe does not need its own Facebook to develop AI knowledge

Actually, no, that is not necessary. The reason it is not necessary is this: there are several parts to developing AI. One part is developing new methods, algorithms, science; making the field go forward. For this you don’t need a FB or Google. You need funding for research, you need good infrastructure for universities, large computational infrastructure that is accessible to researchers, and you need industry support. And that could exist in Europe.

Myth: You need big data for AI

There is this myth that somehow you cannot develop new AI techniques if you don’t have access to enormous amounts of data, like Facebook, Google and Microsoft do. It is not the case. At FAIR, for example, we almost exclusively use public data, because we want to be able to compare algorithms to other people’s. So, we don’t use internal data. Once we have something that works, of course, we work with engineering groups, and they try it on internal data. But to actually make research go forward, you don’t need access to the data that companies like FB have access to.

You need the drive from the applications, of course, to be able to motivate enough people to work on this. What makes FAIR possible is that FB is a large company, is well-established in this market, and has enough profits or cash to finance long-term research. 

It used to be the case for Philips. Holst’s creation was a forward-looking, fundamental lab. I had friends working there 20 years ago. Not the case anymore. Bell Labs is the same. It used to be a leading light, it is a shadow of its former self. It is true of a lot of industry research labs across the world, particularly in Europe. Today in Europe, if you want to find an advanced research lab in information technology in industry, there just aren’t many that practice open research on a grand scale.

My advice to Brainport-based companies seeking advice on AI technology? Get ambitious and go big.

YL: Well, it is up to companies like Philips or NXP or others, that are sufficiently forward looking and have enough resources to really get into this to create ambitious research labs. If you are not ambitious enough about the goals of a research lab, it is going to be second-rate. And if you want to be ambitious about it, it has to be open. That means the culture is very different. If you are a company that builds widgets, you tend to be very secretive about your research and development.

It is the case for Apple, for instance. Apple is nowhere to be seen on the research circuit for AI. They develop the technology around AI, but they don’t really push the science of AI forward, because they build widgets and they have a secretive culture.

The companies that move the field forward are the ones that are not secretive and not too possessive about intellectual property. And that puts them in a good position to hire, to innovate, to propose tools that other people use, so it makes it easier to make progress.  Practice open research. That is my recommendation.

Open source is essential for faster innovation. Facebook basically doesn’t believe in patents.

There is no need for protection. What makes the value of a technology is how fast you can bring it to market.

For a company, you have a choice. One option is working with universities, which is relatively cheap, and then trying to get new innovations from them by hiring students, by having interns, or by having research contracts with universities.

That creates a relatively slow process with a lot of friction for technology transfer. The main issue with technology transfer is not whether you have the best technology; it’s whether the people who have to adopt it believe this good technology is something they can do something with.

The situation we find ourselves in sometimes is that we think we have the best system for, say, classifying text, translating language, or recognizing speech. We open source it, and of course we talk to the engineering groups at the same time. And the engineering groups, you know, they are doing their thing, they don’t have a lot of bandwidth. They have to reallocate their resources to pick up a new technology and make progress. So, they have to believe that what you bring to them is really very useful. And what we do is, we put it in open source and we can point to it and say: look, it has 5000 stars on GitHub and it is used by 200 companies, except us. Isn’t that embarrassing? Things like this happen; convincing product groups and engineering groups that your technology is good is the main obstacle to technology transfer.

If you have an in-house research group, even if you practice open research, even if you open source everything, you will get there first. And that is the only thing that matters. You don’t need to protect it. FB basically doesn’t believe in patents.


Listen to radio4brainport.org for more

Many thanks to Erika van der Merwe