
Interview with Maarten Stol from BrainCreators

UIIN – Lehigh University Iacocca Scholars Tori Campbell and Jessica Osgoodby recently had the opportunity to interview Maarten Stol of BrainCreators at the company's scenic Prinsengracht office. The principal scientific advisor shared with our team how BrainCreators translates AI research into successful business solutions, and what the future holds for AI applications.

Maarten Stol, Principal Scientific Advisor, BrainCreators

JO: Can you tell us a little bit about BrainCreators, and the nature of your work?

MS: We are an AI consultancy company, with the difference that we can also develop complete solutions for our customers if they cannot do so themselves. Often, our customers are companies that need to make the transition from only having a classical IT department to also having a data science department. The way we see it, companies got their own IT departments in the '70s, '80s, and '90s, and now artificial intelligence and machine learning are going to add another layer of functionality on top of classical IT.

TC: Can you elaborate on how you collaborate with universities?

MS: We are mostly in contact with researchers from the University of Amsterdam (UvA) and the Vrije Universiteit (VU), and every now and then with other universities. Our founders are alumni of the UvA and VU, both located in Amsterdam. This has the benefit that we are still in contact with some of our graduation supervisors and can ask for academic advice when we need it. Our team visits scientific conferences at least twice a year, as well as many other scientifically oriented meetups. Every year we have at least three Master's students doing their graduation work on our team, and usually about ten other students working on smaller projects of about one month each.

TC: How do you address the needs of the different businesses that approach you?

MS: There are different kinds of customers. There are companies that have already done some experimentation with machine learning; they just need the right knowledge, the right tools, etc. We educate their teams, or sometimes have their team members do an internship here at the company. They bring their data, their problem, and their definitions with them, and we walk them through the whole procedure of making the right choices, selecting the right tools, and developing the right software, so that when they finish the program, they can go back to the parent company and take their knowledge with them. This knowledge transfer is at the core of what we do.

It is no longer the case that we only make software and then sell a license to it – that is how startups worked ten years ago. You would have one good idea, implement the software yourself, license it out, and that would be your business model. We have turned it around so that the product is owned by the customer; they now own most of the IP. An important part of what we do is to help them build the right knowledge where they would otherwise depend on experts to use this technology. When we leave, what we take with us is the experience: the lessons we learned in their vertical, we can apply to the next project.

JO: How do you help customers who have little experience with AI technology?

MS: Sometimes we are talking to a customer that does not have a team of experts [in machine learning], and sometimes they do not want to set one up. In that case, our developers can build the complete solution as enterprise-level software and [the customer] does not have to invest energy in learning the techniques. They just receive the solution. There is a whole spectrum between these two extremes, from a highly technical and involved customer that just wants knowledge or an extra pair of hands, to a customer that does not want to or cannot afford to invest time and effort – they just want a working model.

TC: How can data be used in conjunction with machine learning?

MS: When you have enough data about your processes, for example, the diagnostic behavior of your experts or long-term data from your sensors, you can use this data to train a model that performs like the expert or predicts your sensor values. That knowledge is implicit in the original data. [Using data] is more attractive these days because more and more of it is collected and kept: camera data, radar data, sensor data, financial data, user behavior data, etc. In this data, there is a lot of implicit knowledge to answer questions like: how efficient is your team, how well are your machines maintained, are you spending too much or too little on maintenance, how often do the machines break, and does that cost you more than the maintenance would? Machine learning technology makes this knowledge more explicit and tries to use it to achieve a type of intelligence that can advise you on your business processes, whatever they may be.
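To make this concrete, here is a minimal sketch of what "training a model to predict your sensor values" might look like in practice. The synthetic data, the feature names, and the choice of scikit-learn's GradientBoostingRegressor are illustrative assumptions on our part, not a description of BrainCreators' actual tooling.

```python
# Minimal sketch: predicting a sensor reading from historical process data.
# All names and the synthetic dataset are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Hypothetical maintenance log: hours since last service, machine load,
# and ambient temperature, logged alongside a vibration sensor.
X = np.column_stack([
    rng.uniform(0, 500, n),    # hours_since_maintenance
    rng.uniform(0.1, 1.0, n),  # load_fraction
    rng.normal(20, 5, n),      # ambient_temp_celsius
])
# Synthetic target: vibration rises with wear and load, plus noise.
y = 0.01 * X[:, 0] * X[:, 1] + 0.05 * X[:, 2] + rng.normal(0, 0.5, n)

# Hold out a test set, train a regressor, and check its error.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print("Test MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```

Once such a model tracks the real sensor well, large deviations between its predictions and the actual readings can flag machines that may need maintenance, which is one way the implicit knowledge in the data becomes actionable.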

JO: Should we be worried about AI’s capabilities in the future? And do you talk about the ethics behind artificial intelligence in the office?

MS: We do, and we are in contact with ethics advisors, academics at different universities who give us advice and input. Their role is not to give an ethical stamp of approval to our projects, but to provide an awareness of the important questions surrounding ethical issues.

As for future worries, it surprises me how concerns in the media can sometimes differ from the concerns of people on our team. Take the example of autonomous killer robots, a frightening image indeed. But for now, I don't think we are there at all. Right now, even the self-driving car is still out of reach, and although we all like to say and hear that it will arrive soon, I am not so sure; I expect a solution will require a real technological breakthrough.

There are, however, some real concerns about technology and autonomy that worry me. I am afraid we still have not developed a good notion of responsibility for AI systems. At the same time, I think we should be careful not to focus only on responsibility for physical things, like the killer robot or the self-driving car. Instead, we may need a more general notion that also covers any information-processing entity that lives on the web.

It is easy to point at a drone when it makes a mistake. It is less straightforward when the mistake is made by an algorithm, or a whole class of algorithms, making important choices for you: algorithms that decide whether you get a mortgage, parole, or the right medical treatment. Currently, such algorithms are still limited in their autonomy, but this is changing. And since algorithms are invisible, and people only fear what they can see, the danger is that we may not be prepared when large-scale autonomous algorithms start making choices with undesired consequences. Who exactly is responsible here? Answers range from the developers to the shareholders to the government.

Moreover, even if we had a sufficient answer to such questions today, it is by no means certain that the answer would still be relevant in ten years' time. Because this is such a gradual process, it can be hard to say when a new answer is needed.

Image credit: BrainCreators
