Keynote

Daniel Hulme, founder and CEO of the London-based, award-winning company Satalia, will deliver the opening keynote at Intergraf Currency+Identity in Lyon on Wednesday 24/03/2021.

A leading expert in Artificial Intelligence (AI) and emerging technologies, Daniel spends a good amount of time educating global leaders and companies on the meaning, impact and practical implications of AI.

What exactly is Artificial Intelligence (AI)? How will it reshape business and how important is it to our future? We caught up with Daniel ahead of his keynote address 'Artificial Intelligence and its impact on society' to find out.

Wednesday 24/03/2021, 09:20-09:50
Plenary: Currency and identity - preparing for the future
Auditorium Lumière


Can you tell us what Artificial Intelligence (AI) is and what it isn't? Where it came from and where it is going? And what separates AI from automation and machine learning?

Over the coming few decades we’re going to see massive changes in the way we interact with our environment, and each other.

Many of these changes were once the realm of science fiction, but Artificial Intelligence (AI) is going to make much of them reality. These changes will raise many ethical questions and will force us to reassess our social and economic models. Many of the questions that philosophers have been pondering for millennia will now have to be practically addressed.

There is a huge misunderstanding about AI in industry. Most of what is being promoted and sold as AI is not AI. More and more people claim to be AI experts, but only a small handful truly understand what these technologies are and what they are capable of achieving. It's like claiming you're a surgeon because you know how to knit.

Only a small handful truly understand what these technologies are and what they are capable of achieving.

Companies are hiring data scientists, thinking they will solve their AI problems. This is hugely naive. There has probably been more hype around AI than any other technology I can think of. And whilst I suspect there might be a bubble in the short term, AI will impact our businesses and lives more than perhaps any other technology.

There are two definitions of AI, and the more popular one is the weaker. This first definition concerns machines that can do tasks that were traditionally in the realm of human beings.

Over the past decade, due to advances in technologies like deep learning, we have started to build machines that can do things like recognise objects in images, and understand and respond to natural language. Humans are the most intelligent things we know in the universe. So when we start to see machines do tasks once constrained to the human domain, then we assume that is intelligence.

But I would argue that you can’t benchmark machine intelligence against human intelligence. Humans are good at finding patterns in, at most, four dimensions. And we’re terrible at solving problems that involve more than seven things. Machines can find patterns in thousands of dimensions and can solve problems that involve millions of things.

Even these technologies aren't AI - they're just algorithms. They do the same thing over and over again. In fact, my definition of stupidity is doing the same thing over and over again and expecting a different result.

The best definition of intelligence - artificial or human - that I've found is goal-directed adaptive behaviour.

The true definition of AI involves systems that can learn and adapt themselves without the aid of a human.

I use goal-directed in the sense of trying to achieve an objective, which in business might be to roster your staff more effectively or to allocate marketing spend to sell as much ice cream as possible. It might be whatever goal you’re seeking.

Behaviour is how quickly and frictionlessly I can move resources to achieve the objective. For example, if my goal is to sell lots of ice cream, how can I allocate my resources to make sure that I'm achieving the objective?

But the key word for me in the definition of goal-directed adaptive behaviour is adaptive. If your computer system is not making a decision and then learning whether that decision was good or bad and adapting its own internal model of the world, I would argue that it’s not true AI.

And it’s OK for companies at the moment to be calling machine learning AI. So for me, the true definition of AI involves systems that can learn and adapt themselves without the aid of a human. Adaptability is synonymous with intelligence.
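As a rough illustration of goal-directed adaptive behaviour, here is a minimal sketch in Python of the ice-cream example above: an epsilon-greedy loop that keeps re-allocating marketing spend and updates its own internal model of the world after every decision. The channel names and returns are invented purely for illustration; this is not Satalia's system.

    import random

    # Estimated return per marketing channel - the system's internal model of the world.
    channels = {"social": 0.0, "search": 0.0, "outdoor": 0.0}
    counts = {c: 0 for c in channels}

    def observe_sales(channel):
        # Hypothetical stand-in for how the real world responds to spend on a channel.
        true_return = {"social": 1.2, "search": 0.9, "outdoor": 0.5}
        return random.gauss(true_return[channel], 0.3)

    for week in range(52):
        # Goal-directed: usually pick the channel currently believed to sell the most
        # ice cream, but keep exploring occasionally so the model can adapt.
        if random.random() < 0.1:
            channel = random.choice(list(channels))
        else:
            channel = max(channels, key=channels.get)
        reward = observe_sales(channel)
        # Adaptive: each decision updates the system's own model of the world.
        counts[channel] += 1
        channels[channel] += (reward - channels[channel]) / counts[channel]

    print(channels)  # the learned estimates now steer future spend

The point is not the particular algorithm but the loop: decide, observe the outcome, and change the internal model. Remove that last step and, by the definition above, the system is no longer AI.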

To summarise, there are broadly two flavours of AI:

    The first is focused on fully automating tasks that were traditionally only in the realm of human capability.

    The second is building complex adaptive systems.

Both flavours are quickly helping organisations become more efficient and effective.


What is the greatest fear when it comes to the potential of AI? Does or will AI have the ability to change societies by eliminating large numbers of jobs and favouring only super intelligent people?


In the short term, over the coming decade, I believe that AI will create jobs. In the long term, it will remove more jobs than it creates.

I spend a lot of time thinking about the concept of economic singularity. This is the point at which AI will free people from their jobs and those people won’t be able to retrain fast enough to get another job, because AI will have taken it, too.

Some experts believe that this could happen in the next 10 to 20 years, and that governments and our economy aren’t prepared for it. My company’s purpose is to try to address these future problems. We need to somehow create a global infrastructure that supports those people who are going to be out of work.

There’s another concept called the technological singularity, in which we build AI smarter than us in every possible way. It will be the last invention humanity needs to create, because it will be able to think infinitely faster and better than humans.

Many scholars predict we will birth a superintelligence around the middle of our century. It will either be the most glorious thing to happen to humanity or perhaps our biggest existential threat. My concern is that if we are not cooperating as a global species by the time we create it, then it will see us as a threat and remove us from the equation.

My purpose is to steer the world toward cooperation. And that means reinventing our political and economic models, and agreeing on a new objective function for humanity. The impulse for countries to increase GDP and companies to make profits means that more and more investment will be made to drive efficiencies and profits, which is leading us to a global economic and environmental crisis. We need a sustainable objective function. And we need to get everyone on the planet contributing to it. Otherwise, we may destroy ourselves.

I don’t believe that governments are prepared or can act quickly enough. So I hope the change will come from business leaders who have a huge influence and responsibility to steer us toward a positive future.


Do you see AI having a major impact on the composition of national and international economies by altering the balance between SMEs and large corporations? And will AI necessarily favour the latter?


I don’t think that success will depend solely on technology differentiation. There are already huge open repositories of data. Most of the tools you need to do machine learning, optimisation and AI are open and free. The battleground is not technology, it’s talent. The companies that will survive are the ones that have a strong enough purpose that attracts talent and customers.

Prior to COVID-19, companies were haemorrhaging talent as a result of well-capitalised competition, misery-inducing organisational design, and environments that don’t empower people to innovate. How can companies invest inwardly and use technologies and concepts like decentralisation to stay competitive? Can these same concepts be used to improve our societies?

The battleground is not technology, it’s talent.

Intelligence is synonymous with adaptation. The quicker you adapt to a changing environment, the more intelligent you are. When technology is commoditised, you’ll live and die by your ability to attract talent and empower them to build innovations that keep you relevant.

This rapid change is disrupting traditional employment and technical paradigms, and highlighting where organisations' operations are dysfunctional. This means that we are starting to completely rethink company structures to enable them to be much more adaptive. Aided by technologies like AI and blockchain, organisational structures are emerging that attract, retain and motivate talent in new ways.

Almost every tech conference I present at now isn't about tech. It's about talent, ethics and purpose. Tech is seen either as an enabler or as something that we should be cautious of - and I agree with both views. I suspect that over the coming years these topics and conversations will crystallise around competing operating ideologies, as governments do around economic ideologies such as capitalism and communism.

The cloud, machine learning, optimisation and true AI either already are or eventually will become commodities that everyone has cheap and immediate access to. No company competes today by their access to electricity. And no company will compete by their access to AI.

Table tennis, office dogs, beanbags and kombucha-on-tap, though joyful, are superficial employee perks that can easily be replicated. Through lack of regulation, sheer quality of vision, monopolistic powers or all of the above, the markets have rewarded tech giants with seemingly infinite access to cheap capital. You cannot sustainably out-benefit or out-pay talent. Winners of the future will be determined by their ability to attract, retain and empower a global workforce through having a strong and positively impactful purpose.

I think you have to reinvent the concept of a company.

Ultimately, though, I think the future of companies, governments and the planet is decentralised. How does a company - or government for that matter - compete with the deep-pocketed tech companies? I think you have to reinvent the concept of a company. Decentralise your innovations, enabling a global talent pool to access and contribute to them. Instead of dying from the inside, innovation can be constantly improved by a decentralised community.

I envisage a world without companies or countries where anyone can start an idea and everyone has the opportunity to contribute to that idea, whether they're a marketer, designer, engineer or strategist, and be paid fairly for that contribution.

By unlocking global talent and providing a mechanism for frictionless innovation, we hope to see the rapid creation of more purposeful organisations, ones that use technology to ensure our basic needs in healthcare, nutrition and education are met.

Do you think AI could help manage more intelligently the 160 billion banknotes that are put into circulation every year, from the moment of issue to when it needs to be determined whether they are still fit for circulation, have been altered or forged, etc.?

It's important to distinguish between what it means to use AI versus just 'using technology'. We could use smart tags and sensors to track every coin and note. But we have to ask ourselves: why?

If we want to determine whether a note is damaged, has been altered or forged, then certainly we can use machine learning to do this. If the data was aggregated, then we could use advanced analytics to track and trace currency, and extract insights that could help us decide how to manage the circulation of money more effectively.
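As a sketch of what that might look like in practice - purely as an illustration, with invented feature names rather than anything specific to banknotes - an anomaly detector trained only on measurements of genuine notes could flag unusual ones for human inspection:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Hypothetical measurements per note: ink density, length ratio, watermark contrast.
    genuine_notes = rng.normal(loc=[0.8, 1.0, 0.6], scale=0.05, size=(1000, 3))

    # Learn what 'normal' looks like from genuine notes only.
    detector = IsolationForest(contamination=0.01, random_state=0).fit(genuine_notes)

    suspect = np.array([[0.55, 1.0, 0.3]])   # a note whose measurements look unusual
    print(detector.predict(suspect))          # -1 flags an outlier, 1 looks normal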

I’m not a domain expert, but if there are real economic pains or gains associated with managing the lifespan of currency, then technology and AI could certainly be used to help, especially if humans or antiquated systems are involved in the process.


Is there a danger that AI can also be misused for criminal purposes in the currency and identity sector?


As with all technology, it can be used for good and bad. Those who disregard or circumvent regulation and the law are at an advantage, since they can more rapidly produce content and systems that cause harm.

We can use AI to do bad things and we can use AI to identify those bad things.

I like to use deepfakes as an example. Deepfakes are artificially generated counterfeit videos. Generating deepfakes is relatively straightforward, but detecting them is tricky. If I’m a content platform like Facebook or Google and remove content because I thought it was a deepfake, then I could be liable.

We can use AI to do bad things and we can use AI to identify those bad things. But if we identify it incorrectly, then we could be accountable. There’s a great book called Future Crimes that covers these issues way better than I can here.


Non-cash payments are already on the rise in some countries and the share of cash is declining. How do you see AI affecting cash usage?


Money is the lubricant of commerce. AI and blockchain are already spawning new processes and business models that take the friction out of the system. As these models become more secure, trusted and adopted, we will see a cannibalisation of competing processes (such as cash). Unless those processes also become frictionless and can help commerce in a way that digitisation can't.

We will see a cannibalisation of competing processes (such as cash).

Over the coming decades these smart devices will become exponentially smaller. Perhaps small enough to run through our veins and gather detailed data about our physiology. This data will surface insights that will enable our environment to intelligently interact with us in ways we can't yet imagine. And some people believe that it will even allow us to cheat death.

Aside from predicting when to make us coffee or when our autonomous car should pick us up from the office, our environment will learn to make our lives more comfortable by collaborating and interacting intelligently with itself.

Around half a billion years ago, the formation of the earliest eyes helped trigger the Cambrian explosion of diverse biological life. In perhaps the same way, I would imagine that we will see an explosion of technology, stemming from the convergence of an abundance of sensory data with the ability for our environments to reconfigure and perhaps construct themselves without human intervention or guidance.

The ability to transact micro-payments frictionlessly and safely is an essential element for the infrastructure that is forming the future fabric of our physical and digital interactions.


How could AI change the control of identities, at borders and beyond? Will physical documents be replaced by biometric data and will the (human) border control officer be reduced to a silent observer or, at best, a fallback?


AI can probably authenticate you more safely and securely than passports. I suspect we are going to see a shift towards utilising data sources such as biometrics.

However, my guess is that the transition will happen more slowly where getting it wrong - false negatives - causes severe damage. It’s safer for organisations to use AI as a decision support tool, leaving the human as the final - and ultimately liable - decision maker.

Do you consider facial recognition as part of AI technologies?

If we take the weaker definition of AI, which is getting computers to do things that humans can do, then facial recognition can be considered AI. Due to a paradigm shift a decade ago, we can now get computers to recognise objects as well as, if not significantly better than, humans.

About a decade ago there was a 'Big Bang', a dramatic improvement in our ability to design and train artificial neural networks, and our ability to get a computer to 'see'.

For over 50 years, researchers have been trying to get machines to see with human level ability. The original focus of my PhD was computer vision. The neuroscience lab I worked at was interested in how bumble bees see. My task was to reverse engineer and model their visual behaviour in a machine. Bumble bees have tiny brains - a million neurons - small enough to fit on the end of a needle. Yet, until about a decade ago, bumble bees could perceive and navigate the visual world far better than any computer.

For many decades, computer scientists have tried to model biological brains in machines. Brains are made up of thousands, millions or even billions of connected neurons, all sending electrical signals to each other. This is called a neural network. Since the early 1940s, computer scientists have been building artificial neural networks.

It is broadly true, but a huge oversimplification to say that artificial neural networks operate the same way as biological brains. And excessive claims for their capabilities have led to disappointments and indeed to AI winters. But about a decade ago there was a 'Big Bang', a dramatic improvement in our ability to design and train artificial neural networks, and our ability to get a computer to 'see'.

Geoff Hinton, a pioneer in neural network research, exploited advances in the field of parallelised computer hardware, i.e. GPUs. In 2012, Hinton and his research team classified images more than 10% better than the best competing algorithms, demonstrating a step change in the field.

A mere 40 lines of code can classify cats with 80% accuracy. These systems struggle with some kinds of ambiguous images, but in general significantly outperform humans.
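As an indication of how little code that now takes, here is a minimal sketch (not the 40-line program referred to above) that labels a photo as cat or not-cat with a pretrained ImageNet network, assuming PyTorch and torchvision are installed and a file photo.jpg exists:

    import torch
    from PIL import Image
    from torchvision import models, transforms

    # ImageNet classes 281-285 are the domestic cat breeds.
    CAT_CLASSES = set(range(281, 286))

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.eval()  # inference only: the network does not learn anything here

    img = preprocess(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        prediction = model(img).argmax(dim=1).item()

    print("cat" if prediction in CAT_CLASSES else "not a cat")

Note that nothing in this snippet adapts: the network was trained once, elsewhere, on someone else's data.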

But recognising a face is typically a very small element of a complex system that then has to make decisions and safely (and ethically) learn from those decisions. True AI systems are ones that adapt themselves in production to better achieve their goal.


There is an ongoing discussion about facial-recognition algorithms being biased along lines of ethnicity and age. Is this the result of larger data sets available for, for instance, middle-aged white males than elderly black females? Or are biases inherent in any algorithm?


All machine learning models are biased. Neural networks (often referred to as deep learning) are just one technology amongst several that fall under the umbrella term machine learning. And machine learning is not synonymous with AI.

The crucial ingredient in AI is the system's ability to adapt itself. In enterprise environments you rarely see self-adapting systems. In corporate environments, systems that utilise deep learning models are almost always pre-trained and the parameters frozen when in production.
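In code, that freezing is explicit: the model is trained offline and then served as a fixed function. A minimal PyTorch-style sketch (an illustration, not any particular company's deployment):

    import torch

    # Stand-in for a model that was trained offline (any pre-trained network would do).
    model = torch.nn.Linear(10, 2)

    # Freeze the parameters: in production the model no longer learns from its decisions.
    for param in model.parameters():
        param.requires_grad = False
    model.eval()

    with torch.no_grad():                  # pure inference, no adaptation
        decision = model(torch.randn(1, 10)).argmax().item()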

There is a common misconception that machine learning models are 'programmed' and therefore subject to the biases and fallacies of the 'programmer'.

There is a common misconception that machine learning models are 'programmed' and therefore subject to the biases and fallacies of the 'programmer'. The widespread accusations that 'middle-class white males' are programming AI systems that embed their own bias are based on a fundamental misunderstanding.

Some techniques do depend on programmatic decisions but, for most modern approaches, models are faulty for one or more of the following reasons:

    the quality and quantity of the training data

    the usefulness of the features for accurate prediction

    the machine learning method used

    the parameters of the method

    the hyperparameters of the method.
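To make the same point concretely, here is a minimal scikit-learn sketch (synthetic data, purely illustrative) in which every one of those failure points appears, and none of them is a hand-written rule:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # 1. Training data: its quality and quantity bound what can be learned at all.
    # 2. Features: here 8 synthetic columns stand in for whatever we chose to measure.
    X, y = make_classification(n_samples=500, n_features=8, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # 3. The method and 5. its hyperparameters are chosen by the practitioner.
    model = RandomForestClassifier(n_estimators=100, max_depth=5, random_state=0)

    # 4. The parameters themselves are learned from the data, not programmed.
    model.fit(X_train, y_train)
    print(model.score(X_test, y_test))

Change the data and you change the model; the 'programmer' never writes the rules it ends up using.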

One of my favourite stories that highlights the fact that machine learning models are trained and not programmed is the tale of the camouflaged tanks. It is probably apocryphal, but it clarifies the problem nicely.

In the early days of neural networks, the army decided to train an artificial neural network to identify tanks that were hidden in woods.

They took pictures of woods without tanks, and then pictures of the same woods with tanks sticking out from behind trees. They trained a neural net to discriminate the two classes of pictures, and the results were impressive.

The army was even more impressed when it turned out that the neural net could generalise its knowledge to pictures from each set that had not been used in its training.

To make sure that the net had indeed learned to recognise partially hidden tanks, the researchers took some more pictures in the same woods and showed them to the trained net.

They were shocked and depressed to find that with the new pictures, the net totally failed to discriminate between pictures of trees with partially concealed tanks behind them, and pictures with no tanks.

The mystery was finally solved when someone noticed that the training pictures of the woods without tanks had been taken on a cloudy day, whereas those with tanks had been taken on a sunny day.

The net had been trained to recognise the difference between woods with and without shadows, rather than woods with and without tanks!


There are concerns around the lack of explainability or transparency of AI. Can we expect advancements to overcome this difficulty, which in turn generates challenges for, among others, expert witnesses who need to explain in comprehensible terms AI-supported decisions to a judge or jury in courts?


When training machine learning models, we want to provide them with feature data that allows them to classify correctly. The programmer is not providing these rules, but does need to ensure that the conditions are right for the rules to be found by the machine learning model.

When I was modelling bumble bee brains during my PhD, some neuroscientist colleagues were probing actual bee brains to see what neurons activated when the bees were shown different images. In human brains, there are neurons that only activate if you look at highly specific images. For instance, we have neurons that are tuned (trained) to activate only when we see faces. And indeed only when we see the faces of particular individuals.

Explainability has technical, social, legal and political challenges

Most machine learning models today are 'black boxes', in the sense that we do not know the details of what goes on inside them. But we are starting to build tools and techniques to probe inside them - like we did with the bee brains - to understand how they make their predictions.
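One simple example of such a probe - a sketch assuming scikit-learn, not a description of any particular product - is permutation importance: shuffle each input feature in turn and see how much the model's accuracy drops, which reveals what the black box is actually relying on.

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature and measure the drop in accuracy: large drops mark
    # the features the 'black box' depends on most.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])
    for name, importance in ranked[:5]:
        print(f"{name}: {importance:.3f}")

Techniques like this give a partial, after-the-fact account of a prediction; they are still a long way from the kind of explanation a judge or jury would expect.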

Explainability has technical, social, legal and political challenges. If you’re building algorithms now that are making decisions in people’s lives, in Europe you need to be able to explain how those algorithms are making those decisions.

Unfortunately, countries and organisations that don’t have such constraints may be able to out-innovate countries that do have those restrictions, because it’s very, very, very hard to build explainable algorithms. And where there is no legislation for this, you may find unscrupulous organisations trying out systems that could have horrible outcomes, with unclear jurisdictional repercussions. In a hospital, for example, is it you or the algorithm that just made the mistake?

That’s why it’s important to understand and to explain how a computer is making its decisions. And we are not there yet.


Could AI have predicted and better managed the outbreak, spread and global repercussions of COVID-19? And do you think AI could and would predict future pandemics?


For the most part, AI can make predictions and decisions that are far superior to humans. However, it is still up to the humans to form the question.


Do you believe there are sectors that will not be transformed by AI?


Every sector that has frictions and inefficiencies can be helped by AI. Some sectors will see incremental improvement. Some will experience new paradigms.

Blockchain technology is giving the world a trusted data platform. And AI is providing the means to collaborate and connect without friction. Over the coming decade we might see the emergence of a DAO (decentralised autonomous organisation)  that will allow for truly decentralised and distributed decisions and actions. I can imagine a world whereby anyone could boot up a project by launching a DAO that enabled contributions from anywhere in the world.

Every sector that has frictions and inefficiencies can be helped by AI.

The DAO is similar to the open-source movement. But in this new paradigm, anyone - software engineers, designers, marketers, accountants and even strategists - will be able to rally around an idea and contribute to its development. Work won’t be provided for free or for kudos, as in the open-source model. Instead, fiscal remuneration will be determined by the quantity and quality of the contribution.

This means that anyone will be able to contribute to a project, even just for a few hours, and be rewarded fairly for their work. As people work on these open projects, the DAO captures their contribution on a public blockchain. These contributions accumulate to form a reputation that determines the rate of remuneration on future projects. People develop different rates for different skills, and those rates evolve dynamically over time. You would be paid a different rate for marketing work than for software development, depending on your relative skill in each.

Many of these open projects will use digital tokens as their economic model. A Cambrian explosion of funding models will appear, such as ICOs (initial coin offerings) and other types of token sales. Selling tokens will give DAO projects the capital to get started. By reducing waste and friction, we may reach a point whereby new innovations help ensure that everyone’s basic needs are met.

Giving everyone seamless access to healthcare, nutrition and education will mean that people have the freedom to create and contribute to DAO projects without the need for initial funding. Since digital tokens have no jurisdiction, contributors from anywhere in the world can be remunerated with the same currency. Someone in Europe who contributed the same value to a DAO project as someone in India would receive the same remuneration. And because everyone has a fair opportunity to contribute to DAO projects, there may be a rapid redistribution of wealth.

By reducing waste and friction we may reach a point whereby new innovations help ensure that everyone’s basic needs are met.

One of the founding principles of the DAO is that all products are open source. The creation of a completely frictionless free market, where the cheapest and best-placed people can contribute, means that toxic companies are starved of labour and customers. Efficient markets coupled with conscientious consumption could spawn tens of thousands of new organisations whose products and services are developed to meet real needs and provide real benefits.

People will be able to work anywhere they want, which could cause mass migration. Digital nomads could force governments to reassess and innovate their policies to attract and retain corporations and talent by reducing taxes, and slackening employment laws.

The freedom to work anywhere will cause substantial population shifts and re-energise communities, with people growing their own food, harnessing natural energy sources, and turning away from mass-produced or packaged solutions. This re-emergence of community after years of isolated self-interest could have a huge impact on happiness levels of all age groups.


And, finally, if any organisation or individual in our community wants and needs to learn more about AI and how it may impact their business and future, where would you recommend they start?


I believe that there are only a handful of people around the globe who truly understand how to architect and productionise AI.

There has been an explosion of people calling themselves AI experts over the past several years, which unfortunately has become a catch-all term for people who use computers to extract insight from data. It seems that anyone who has done some data analysis as part of their degree or job can call themselves an AI expert.

Almost no one is actually using AI. And especially not those who claim they are.

And why wouldn’t they? The market is crying out for them and willing to pay big bucks. People are rebranding themselves as AI experts and getting hired by organisations who want valuable insights fished out of their data lakes. Unfortunately, extracting insights from data is not even half-way to AI.

Back in 2013, Duke University professor of psychology and behavioural economics Dan Ariely said "Big data is like teenage sex: everyone talks about it, nobody really knows how to do it, everyone thinks everyone else is doing it, so everyone claims they are doing it…". I told a colleague at the time that AI would be even more hyped than big data, and that within a few years everyone would call their company an AI company.

The buzz around AI has drowned out the noise of other technologies like big data, edge computing, blockchain, and quantum computing. You can’t get through the day without coming across AI in the media and several people trying to sell you AI solutions. But here’s a bold statement: at the time of writing, almost no one is actually using AI. And especially not those who claim they are.

That’s not to say that AI isn't coming. It is. But over the past several years, hundreds of CIOs (Chief Information Officers) from some of the world's leading organisations have not only misunderstood AI, but have also been investing in the wrong technologies and hiring the wrong skills - armies of data scientists - to solve the wrong problems.

The buzz around big data peaked in 2015 and was followed by the digital transformation buzz. Big data woke everyone up to the idea that juicy insights can be found when you analyse multiple data sets. Insights that can help organisations sell more stuff, become operationally more efficient, and mitigate regulatory and governance risks.

The world's leading organisations have not only misunderstood AI, but have also been investing in the wrong technologies and hiring the wrong skills - armies of data scientists - to solve the wrong problems.

To be able to access these insights, you need to get all your data into one place – a place often referred to as a data lake. Seduced by the idea that "there's gold in them thar hills", many organisations have been investing in 'big data' technologies to create data lakes of insight opportunities. This process is known as digital transformation, which just means 'using data to make better decisions' and is nebulous enough to enable strategy consultancies to peddle old technologies as new.

Creating data lakes in this way is usually a mistake. Moreover, organisations think that creating a data lake and putting an analytics layer on top - technologies that allow you to easily generate charts - gives you the right architecture for AI. It doesn't.

Putting an analytics layer on a data lake will yield very little return. It will instead create years of technical debt that will have to be undone. Technical debt is the term used to reflect the cost of additional rework caused by choosing an easy (limited or wrong) solution now instead of using a better approach that would take longer.

Although AI has perhaps been the world’s most over-hyped technology, it is going to have the biggest and most profound impact on humanity. Governments and companies are scrambling to get there first because, as Vladimir Putin said, "whoever becomes the leader in this sphere will become the ruler of the world".


AI - are you ready for it?

Find out why your company needs to get on the AI train.

Listen to Daniel explain what the challenges and opportunities around AI are on Wednesday 24/03/2021.

Register

Registration opens on 19/10/2020.

Save € 250! Register online before 01/02/2021.
