Looking Far Off Into A Sci-Fi Future

I’ve written before about the positive effects that science fiction as a genre can have on the advancement of technology. By thinking far enough into the future, explaining the details of how mankind will overcome current technological hurdles becomes far less important for most writers than thinking about the knock-on effects that these changes will have on the humans that inhabit that society (however that evolves).

I read a great post today by Tiago Forte (‘What I Learned About The Future By Reading 100 Science Fiction Books’) that focuses precisely on this point. Here are a few takeaways:-

  • If you want to move the species forwards, you’re not going to find inspiration simply by reading the same material online as everyone else.
  • Once we are forced to colonise places beyond the Earth, mankind will inevitably evolve in ways that cause divisions, as individuals are exposed to a vast variety of differing external stimuli according to their location (gravity, radiation, the gene pool within a distant settlement etc.) for periods measured in years, not days.
  • Once we start to travel immense distances, time will radically change everything – those who embark on an epic journey are unlikely to be the first to arrive at their destination, because technology will advance so much during their absence that others who leave later will get there sooner.
  • In the same way, technology will become outdated even more quickly – one great example is the 4-megapixel camera on the Rosetta spacecraft, launched back in 2004, which is now lower resolution than the average mobile phone camera today.
  • When we reach the singularity, the chances are that the ‘wide’ AI that develops will not be preoccupied solely with solving the problems that we believe need to be solved today. Instead, it will likely start to seek answers to issues that we can neither comprehend nor currently have the language to describe.

Sure, these aren’t issues that are knocking on the door demanding a solution today. But progress is inevitable and it’ll be fascinating to see how things pan out (no doubt virtually, given the fact that we’re talking about a time well after ours when blogs such as this are no more than a random historical artefact).

 

Coinbase Races Past Bitcoin Industry Milestone

In amongst the various Bitcoin news stories filtering into the mainstream press so far in 2015, without doubt it’s today’s confirmation of a $75 million C Round investment into Coinbase that’s leading the way.

Coinbase is a so-called ‘universal’ Bitcoin services company, and money on this scale going into it is big news – even outside the Bitcoin ecosystem. This is serious money. It represents 23% of the total invested into the entire Bitcoin sector during the whole of 2014 – which was itself a record-breaking year. The investment is way beyond the previous record for a single round ($30.5 million into Blockchain.info last year), and it takes the total funds raised by the company, formed in June 2012, to $106 million.

Coinbase does many things. When it comes to merchant services, they’ve been involved in the lion’s share of the dozen or so big (i.e. $1 billion+) integrations over the past twelve months. They provide one of the most popular wallet and exchange services around. And all at the same time as building out their developer platform to provide tools that let others build Bitcoin services on top of their foundations.

However, it’s important not to get misled by the money. After all, it’s not unheard of for vast sums of investment to be wrongly allocated over the course of history. No, the real story here is the list of investors who’ve stepped forward to take part in this round. In addition to the existing Bitcoin champions from the venture capital industry (Andreessen Horowitz, Union Square Ventures, Ribbit Capital etc), let’s take a look at some of the others that have joined the party:-

Think about it for a minute. Each of these individuals and groups has had reason to look hard at Bitcoin and has ultimately come to the conclusion that this ‘thing’ is not going away. Consider the implications:

  • the NY Stock Exchange – presumably they hold a positive view on the inevitable regulation promised any day now in the form of the revised BitLicense?
  • a telecommunications company in the world’s third largest economy seeing the opportunity for mobile payments?
  • a multinational bank discovering the potential for cheaper money transfers?

The reality is that the individual motivations matter less here than the collection of names that put collective hands in pockets and pen to paper to finalise this deal. These are in no way, shape or form ‘traditional’ Bitcoin cheerleaders. Not by any stretch of the imagination. I have no doubt that this represents the start of moves by the traditional financial services industry that will only accelerate during 2015. I didn’t really expect this to happen until the first regulations were finalised.

Of course, if traditional institutions were going to invest in any business within the Bitcoin ecosystem, it was always likely to be Coinbase. They’ve always been relatively open to regulation and to allowing the traceability of customers’ actions in a way that many (smaller) businesses in the scene haven’t. But any way that you frame it, this is big, big news for the industry.

The money’s likely to be spent on executing the next stage of that plan for global domination. It’s great to hear CEO Brian Armstrong’s goal of scaling the service up to 30 countries by the end of 2015. So it’s as good a time as any to leave you with the recent talk by co-founder Fred Ehrsam in which he quickly sets out Bitcoin’s potential.

Bitcoin and the Trust Web

I always try to share something that I found either valuable or interesting every day on this blog – even if it’s only of interest to me. Today, however, I’m pretty sure it should be of interest to everyone, as the standout article was undoubtedly the one by David Cohen and William Mougayar published on TechCrunch entitled ‘After the Social Web, Here Comes The Trust Web‘. William published one of my favourite articles on Bitcoin last year (here) and it’s great to see Techstars’ Cohen also co-authoring the piece.

The article does a great job of summarising – and simplifying – the value of Bitcoin and the promise of blockchain-related developments for the newcomer. It frames the innovation that’s taking place as an unstoppable force and one that represents a renaissance for technology, computer science and cryptography.

I couldn’t agree more.

It also helpfully distinguishes between the most visible revolution (the new manifestation of money) and the less-visible but even more important revolution that the blockchain is kickstarting, not least by virtue of the fact that the technology creates a new way to write software.

In case you don’t have time to read it (you should), here’s a few key points:-

Focus on new models, not old

If you truly want to see innovation in action, don’t dwell on how uncomfortably Bitcoin fits within the existing paradigms. To understand the real potential, look at how Bitcoin and blockchain technology are forging a brand new path. Tackling regulatory concerns and obstacles placed by incumbents is ultimately futile in their view. Bitcoin does not require permission. Technology is neutral by definition and in this case simply functions as a low-level protocol (a set of rules that govern how a network communicates), just as the Internet relies on the TCP/IP protocol. The real magic happens when clever people build useful things on top of it.

“It’s better to invent new things instead of fighting all things, and it’s easier to create new systems that circumvent the old ones”.

The concept of decentralised consensus

Normal practice over the years has been to rely on one database to confirm whether a transaction is valid or not. With Bitcoin, the authority and trust that would otherwise be in one centralised database has been transferred to many nodes. These nodes record transactions publicly and sequentially, with the technology (cryptography and blockchain) ensuring that no duplication of recording takes place within the decentralised network.
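To make that idea concrete, here’s a minimal sketch in Python (my own simplification, not Bitcoin’s actual data structures) of the kind of hash-chained ledger each node might hold, and of how any node can independently detect a rewritten history:

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents, including the previous block's hash."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_transaction(chain, transaction):
    """Append a block that commits to everything recorded before it."""
    previous = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": previous, "tx": transaction})

def chain_is_valid(chain):
    """Any node can independently check that history has not been rewritten."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

# Every node keeps its own copy; tampering with one record breaks every later hash.
ledger = []
append_transaction(ledger, {"from": "alice", "to": "bob", "amount": 5})
append_transaction(ledger, {"from": "bob", "to": "carol", "amount": 2})
print(chain_is_valid(ledger))    # True
ledger[0]["tx"]["amount"] = 500
print(chain_is_valid(ledger))    # False - other nodes would reject this copy
```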

The future of money?

Bitcoin provides a solution for the half of the world that remains unbanked, with a simplicity whose value has already been proven by well over half of the population in Pakistan and Kenya (via easypaisa and M-Pesa respectively). Whilst legacy banking either can’t or won’t service this demographic, the $400bn+ remittance industry will also be disrupted by the cost savings that Bitcoin brings.

In addition, with legacy infrastructure costs making small online payments uneconomic, Bitcoin and cryptocurrencies in general represent a far more cost-effective way of sending small amounts of money – whether that involves tipping or donations.  It’s hard to imagine anyone wanting to pay more over the medium-term when they have an option.

Finally, when the Internet was created, no native currency was built to work with it in an integrated fashion. Bitcoin represents that solution.

Smart Property / Digital Rights

An asset that knows who owns it is known as ‘smart property’. By recording ownership on the blockchain, suddenly you can use your unique cryptographic signature to prove your ownership beyond doubt. Others can then confirm that you own this property by simply checking the blockchain, whilst your explicit consent is required before any such rights of ownership can be removed from you.

It’s not hard to understand just how powerful an auditable database is likely to be that enables you to “establish a persistent link between [your] identity, reputation, a digital file and its meta-data“.
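As a toy illustration (my own, using the third-party `cryptography` package rather than anything from the article), here’s how ownership recorded as a public key lets anyone verify that a transfer really was authorised by the current owner:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Ownership is simply a public key recorded against the asset.
owner_key = Ed25519PrivateKey.generate()
record = {"asset": "car-VIN-12345", "owner": owner_key.public_key()}

def owner_authorised(record, message, signature):
    """Anyone can check a claimed transfer against the recorded owner's key."""
    try:
        record["owner"].verify(signature, message)
        return True
    except InvalidSignature:
        return False

transfer_msg = b"transfer car-VIN-12345 to bob"
signature = owner_key.sign(transfer_msg)  # only the holder of the private key can do this
print(owner_authorised(record, transfer_msg, signature))                       # True
print(owner_authorised(record, b"transfer car-VIN-12345 to eve", signature))   # False
```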

Smart contracts based on proof of work

In one sense, proof of work is simply a right to participate in the blockchain system. Using the technology that underpins Bitcoin, it’s possible to create contracts that are self-executing, relying on the blockchain for verification as opposed to any centralised judge or court, for example.
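For the curious, here’s a minimal proof-of-work sketch in Python – an illustrative simplification rather than Bitcoin’s real mining algorithm – showing why the work is expensive to produce yet trivial for anyone else to verify:

```python
import hashlib

def sha(data: str, nonce: int) -> str:
    return hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()

def proof_of_work(data: str, difficulty: int = 4) -> int:
    """Search for a nonce whose hash starts with `difficulty` zeros - costly to find."""
    nonce = 0
    while not sha(data, nonce).startswith("0" * difficulty):
        nonce += 1
    return nonce

def verify(data: str, nonce: int, difficulty: int = 4) -> bool:
    """Verification is a single hash - cheap for every other participant."""
    return sha(data, nonce).startswith("0" * difficulty)

contract = "release payment to alice once delivery is confirmed"
nonce = proof_of_work(contract)
print(nonce, verify(contract, nonce))   # prints the found nonce and True
```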

Decentralised peer-to-peer marketplaces    

These days, semi-centralised businesses such as Amazon, eBay and Uber rule their niches. With Bitcoin and the innovations that are currently being developed within the ecosystem, person-to-person marketplaces are evolving quickly that make the role of middleman redundant. The nature of this technology is such that trust is not required for a transaction to take place between two peers and therefore this cost is saved.

As pointed out in the article, one of the fascinating areas of decentralisation technology comes from the shift that we are collectively starting to experience: we previously relied on the centralised organisation for a wide range of things that are now being delivered by the decentralised marketplace (think of trust, rules, identity, reputation and payment choices).

Cryptoequity and DAOs

I’ve touched on both before, but given the complexity of the topics I’ll save the detailed explanations for another time – both the possibilities of issuing a reimagined form of share equity recorded permanently on the blockchain, and Decentralised Autonomous Organisations (businesses that run autonomously without human involvement according to a strict set of rules enforced by software). But it’s worth noting that both have the power to completely restructure certain industries.

Decentralised identity

As has been previously noted, we have a problem with online identity. Millions around the world currently rely on centralised institutions to log in to services, mostly for simple convenience (Facebook or Twitter logins, for example). But of course, doing so provides such companies with extensive data about our activities and interests that they then monetise. Bitcoin provides the start of a solution – an alternative that uses the blockchain to decentralise (and, crucially, let us control) our own identities and reputations.
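To illustrate the difference, here’s a hypothetical challenge-response flow (not any particular product’s API, and again using the third-party `cryptography` package): a service simply verifies that you control an identity key you alone hold, and learns nothing else about you.

```python
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

identity_key = Ed25519PrivateKey.generate()    # lives with the user, not with a platform
registered_pubkey = identity_key.public_key()  # all the service ever needs to store

def login_ok(challenge: bytes, signature: bytes) -> bool:
    """The service learns only 'this is the same key as last time' - nothing else."""
    try:
        registered_pubkey.verify(signature, challenge)
        return True
    except InvalidSignature:
        return False

challenge = os.urandom(32)                                      # fresh per login attempt
print(login_ok(challenge, identity_key.sign(challenge)))        # True
print(login_ok(os.urandom(32), identity_key.sign(challenge)))   # False - stale or forged
```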

It’s great to see a mainstream website such as TechCrunch put out an informed and detailed article about the potential in this area. Hopefully, there will be many more to follow. There’s no doubt it’s been a bumpy start to 2015 in the court of public opinion when it comes to Bitcoin – but then again, what’s new?

 

Cameron’s Crypto Folly

Earlier this week, David Cameron came out with a spectacularly ridiculous statement, promising that, if re-elected as Prime Minister following the 2015 General Election, he would push to ban end-to-end encryption. To counter the threat of terrorism, Cameron stated that there should be “no means of communication” that “we cannot read”.

Given both the prevalence and importance of encrypted communication online, it was a remarkably naive statement to make. Being charitable, I suspect that it was simply an attempt to garner support early in an election year from a general public that understands few of the details of how modern technology works in practice.

As ever, Cory Doctorow wrote a detailed and insightful piece explaining the futility of such a suggestion, as did Professor Bill Buchanan who succinctly set out an explanation of why encryption is so important for us all.

Encryption is a core service on the internet (think of online shopping, banking and messaging for starters). It’s crucial for both the security and privacy of individuals’ communications.

It appears now that the Prime Minister’s office might just have realised the futility of this suggestion. Let’s hope so.

 

Cyberwarfare Reporting

It’s looking like we’re now truly entering the age in which cyberwarfare becomes commonplace. For example, I read today that the US and the UK have agreed to carry out cyber attack war games on each other as part of an attempt to strengthen their defences, starting with a simulated attack on the financial markets. Once you start to think about the fallout from the Sony/’The Interview’ story which dominated the end of last year, the impact of Stuxnet and the more recent attack on a steel mill in Germany in which safety systems were supposedly overridden, it’s clear that these types of events are either becoming more frequent or being reported on more regularly by the press – or both.

But as a recent article points out, the reality is that despite the huge significance of these events it’s almost impossible for the press to actually report accurately on such stories because:

  • cyber warfare doesn’t have physical troop movements that you can report;
  • a government is under no obligation to tell journalists what they’re doing in a cyber war (unlike in a ‘real’ war where the obligations are higher – in theory at least);
  • when it comes to cyber warfare, there is no way to be certain who is doing what and why.

For example, with The Interview last month, journalists were struggling to report the story – the evidence that North Korea was responsible seemed somewhat flimsy, but they clearly had a viable motive. As did the US, for whom the creation of a “cyber bogeyman” to justify increased online surveillance in general was a gift.

In essence, journalists have an overriding obligation to report the truth, yet they have no way of finding out what the truth is under the current system. And when the consequences are as serious as imposing sanctions on another country and reclassifying it as a terrorist state, the risk for journalists of being forced to rely on government accounts can only become greater.

Liquifying the Physical World with the Internet of Things

Yesterday I outlined various factors that are hampering the growth of the Internet of Things. However, there is a solution in the form of the blockchain, as IBM have identified. To recap, blockchain technology could elegantly solve the problems that lie ahead for a number of reasons, not least because it introduces:-

  • one definitive ledger that records every possible transaction permanently;
  • peer-to-peer technology that massively reduces cost and increases security by removing centralised, expensive and vulnerable organisations;
  • increased computing efficiency as P2P utilises idle processing power from around the network;
  • enhanced privacy as details are no longer surrendered to organisations acting as piñatas for hackers.

In this second post, I just wanted to finish with a few areas that the Internet of Things is likely to disrupt moving forwards and why it is therefore so significant. Again, kudos goes to IBM for setting this out so clearly in their report.

Disrupting the Physical World

In today’s world, buying digital content using the Internet has become entirely normal for most. But the Internet of Things is now looking to turn the physical world into one that is “as liquid, personalised and efficient as the digital one“.

The report identifies five key areas for disruption that will drive the transformation resulting in “the liquification of the physical world”.

1. Unlocking excess capacity of physical assets

The Internet has let us find, use and pay for digital content (books, music, films etc) instantaneously and the public has become increasingly comfortable doing so. But the Internet of Things provides us with the ability to find, use and pay for physical items.

With physical items (rooms, vehicles, whatever) online and actively updating systems about their availability directly, the opportunities to monetise these under-used resources are huge. Just look at the growth of the Collaborative Economy.

2. Creating liquid, transparent marketplaces

Driven by mobile and social networks, demand will release supply that was previously constrained in the rapidly expanding peer-to-peer economies.

3. Radical re-pricing of credit and risk

All this monitoring will mean that each individual will start to receive customised credit according to their life, history and situation. Rather than relying on one-size-fits-all models, people who were previously excluded will now be able to access consumer credit that is priced reasonably. Money will flow into areas that it has never been able to reach before.

4. Improving operational efficiency

The sectors that have traditionally been slow to incorporate technology are likely to experience the most significant changes with the Internet of Things. The report highlights farming in particular: it currently requires significant capital expenditure, yet a wealth of data from sensors across equipment, weather conditions, field monitors and many other areas could revolutionise the industry overnight.

5. Digitally integrating value chains

Rather than losing valuable time waiting for a replacement to be shipped after an object breaks, sensors will monitor the performance of objects and automatically source, negotiate the price of and take delivery of replacement parts, thereby minimising any downtime. There will also be a rise in ‘recipes’ connecting services and products (look at IFTTT for a simple example of the value of the concept).
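As a purely hypothetical sketch (the part names, threshold and supplier lookup below are invented for illustration), such a ‘recipe’ might look something like this:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SensorReading:
    part_id: str
    vibration_mm_s: float   # e.g. bearing vibration velocity reported by the sensor

def cheapest_supplier(part_id: str) -> dict:
    """Stand-in for querying a real parts marketplace."""
    return {"supplier": "acme-parts", "price": 42.50, "part_id": part_id}

def maintenance_recipe(reading: SensorReading, threshold: float = 7.1) -> Optional[dict]:
    """IF vibration exceeds the alarm threshold THEN order a replacement part."""
    if reading.vibration_mm_s <= threshold:
        return None
    order = cheapest_supplier(reading.part_id)
    print(f"Ordering {order['part_id']} from {order['supplier']} at ${order['price']}")
    return order

maintenance_recipe(SensorReading(part_id="pump-bearing-7", vibration_mm_s=9.3))
```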

The Importance of Design Thinking

As with every revolution, the key is utility and not techno-wizardry. The IoT will be driven by an improved user experience and improved functionality in devices. The fact that a device is connected to the internet is irrelevant to most people. Compare that with cookers that turn down the heat when the pot boils over or smart toasters that cook toast according to your preference  – in these cases, solutions that increase safety and quality of food will be appreciated by users and drive the growth of the IoT.

With the IoT, each device should be acting in the best interests of its users and not third parties. Crucially, machine-to-machine communication should be invisible to most users, whilst the interfaces that handle machine-to-human interaction must be designed to support far richer interaction.

I’m going to wrap up by simply copying the report’s suggestion for the sort of businesses that will do well in this brave new world. Put simply, they are most likely to enjoy success if they do the following:-

  1. Enable decentralised peer-to-peer systems that allow for very low cost, privacy and long-term sustainability in exchange for less direct control of data.
  2. Prepare for highly efficient, real-time digital marketplaces built on physical assets and services with new measures of credit and risk.
  3. Design for meaningful user experiences, rather than try to build large ecosystems or complex network solutions. 

In the near future, I’ll be delving into how IBM’s thinking has evolved (in the form of the Adept project). But in the meantime, given the fact that predicted growth of the IoT means that it will affect each and every one of us, I hope you found something of use in the last couple of posts.

Why The Internet of Things Needs The Blockchain

Back in September last year, IBM published a paper titled ‘Device Democracy: Saving the Future of the Internet of Things‘. It’s a fascinating document from a couple of angles – partly because of the simple promise that the Internet of Things (IoT) holds for all of our futures – but also because it identifies the blockchain as being a vital part of that future.

I’ll try to summarise the paper here in a couple of posts starting today (part 2 here). Apologies for the length but there’s some great points in there that I wanted to capture.

The Promise

The report points out that our current version of the Internet of Things is not fit for purpose. There will need to be some significant developments before we get to the stage where hundreds of billions of devices are connected to the internet. And there’s sound reasoning that suggests we will be dealing with those sorts of numbers.

Thanks to a combination of technological advances accelerating the overall development, we’re experiencing a continual, drastic drop in the cost of computing power. Historically, such price drops have resulted in an order-of-magnitude rise in the number of units using that power. One effect of this is that whilst we currently rely on application-specific computing within devices, the costs are now so low that we can afford to make such embedded computers general purpose, as opposed to specific.

It is projected that the number of connected devices will rise from 2.5 billion (2009), to 10 billion (2014), to 25 billion (2020), to over 100 billion (2050). Drivers behind this growth include the inexpensive nature of sensors (to the extent that they can even be disposable if necessary), cloud computing that can store and analyse vast amounts of data, increasing levels of connectivity, the increased availability of APIs and a huge growth in 3D printing that will enable people to manufacture devices in smaller batches.

It goes without saying that some industries – specifically those that have never traditionally been IT-intensive (e.g. agriculture, transportation, storage and logistics) – are particularly ripe for disruption.

The Problem(s)

So far, the focus of the IoT has been on high-value applications (monitoring of jet engines, automated smart meters), but demand has been slow. Partly that’s down to the ecosystem – for example, only 30% of heavy industrial equipment is networked and only 10% of smart TVs are used for internet viewing.

1. Cost of Connectivity

We use expensive, centralised clouds and large server farms, not to mention many expensive middlemen.

2. The Internet post-Trust

Relying on centralised systems that use trusted partners is not viable in today’s post-Snowden world, as such a structure gives third parties the opportunity to gain unauthorised access. In any event, building trust at the scale required by the IoT is prohibitively expensive, if not impossible. Furthermore, it’s essential that privacy and anonymity are integral to the system. Closed source (‘security through obscurity’) is obsolete and must be replaced by open source solutions (‘security through transparency’).

3. Not future-proof

When you change a mobile phone every two years, it’s not really a problem if the tech gets outdated. But it’s a different story if you’ve bought a car that’s meant to last for 10 years. Therefore the ability to carry out software and hardware updates becomes crucial when dealing with these types of real world objects.

4. Lack of functional ‘value’

As the report points out, a smart, connected toaster is of no value unless it produces better toast. You can’t expect something to be ‘better’ simply because it’s now connected to the internet. It has to have a real purpose.

5. Broken business models

The lack of profitable business models within the existing implementation of the IoT means that fewer businesses will try to build out the market.

The Solution(s)

 “The foundation of modern computing is the very humble work of transaction processing”

Everything we do on the internet is, in effect, a transaction that is processed, recorded and stored – buying tickets, making phone calls or anything else. But it’s also much wider than that: every digital interaction is a ‘transaction’. For example, there are currently 5 billion social media transactions processed every single day. And if we’ve got that many transactions taking place today, just think of what will happen when the Internet of Things really gets going – the number of transactions that will need to be processed is going to explode.

With computing power becoming both greater and cheaper, the unused capacity of all the connected devices around the world should be harnessed rather than left sitting idle. Using peer-to-peer computing will save significant costs by removing the infrastructure (the centralised data centres).

But it’s not just peer-to-peer itself that provides the answer to the growth in transactions with the IoT. The system must also be trustless. Basically we need a system that does not rely on trusting others and one that avoids a single, central point of failure. The paper suggests that means we need:-

1. Peer-to-peer messaging protocols: providing highly encrypted, private-by-design, trustless messaging between devices

2. Secure distributed data sharing: replacing cloud-based file storage with direct file sharing (such as BitTorrent)

3. A way to coordinate all devices that ensures that they can validate transactions and reach consensus – yup, you guessed it – it’s time for the blockchain.

Blockchains and the Internet of Things

To recap: a blockchain is simply a digital ledger that records every transaction made by every participant. Transactions are verified cryptographically by many participants, who are then rewarded for carrying out that verification. With many participants confirming transactions (known as reaching decentralised consensus), the blockchain eliminates the need for a trusted third-party institution to carry out that function.

As a transaction processing tool, the great benefit is that a blockchain enables the processing of transactions and coordination between the devices involved. For example, users can choose to set permissions on their IoT devices so that they act in response to their location or the time of day, etc. Further rules could be delegated to let the network decide on certain actions that a device carries out (the action happening only if over 50% of the network agrees) – every device could agree to download an approved software update, for example.
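Here’s a toy sketch of that last idea (my own simplification, not IBM’s actual design): a device only applies an update once more than half of the known network has signalled approval of that exact update’s hash.

```python
import hashlib

def update_approved(votes: dict, update_blob: bytes, network_size: int) -> bool:
    """votes maps device-id -> hash of the update that device approved."""
    digest = hashlib.sha256(update_blob).hexdigest()
    approvals = sum(1 for approved in votes.values() if approved == digest)
    return approvals > network_size / 2     # strictly more than 50% of all devices

firmware = b"thermostat firmware v2.1"
digest = hashlib.sha256(firmware).hexdigest()
votes = {"lamp-1": digest, "lock-2": digest, "cam-3": digest, "hub-4": "something-else"}
print(update_approved(votes, firmware, network_size=5))   # True: 3 of 5 devices agreed
```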

Using a blockchain, individual devices could then autonomously execute contracts (basically agreements and payments) with other devices. This unleashes unlimited opportunities for a whole new type of business model on the world – basically introducing machines as economic actors, as I touched upon in my TEDx talk a year ago. Every device can now run itself as an individual business, making decisions about how to share its processing power and other economic resources in order to make decisions that provide it (and – in the initial stages at least – its owners) with the most beneficial outcomes.

The blockchain structure also allows manufacturers to basically hand over responsibility for support and maintenance to a set of self-maintaining devices. In other words, rather than facing the prohibitively expensive prospect of having to support billions of devices, manufacturers can still build businesses to create such devices without shouldering those costs.

And the best part? Users control their own privacy. Because devices are making their own decisions and there are no centralised places for third parties to attack, we see a fundamental shift in the existing dynamic.

“In this new and flat democracy, power in the network shifts from the centre to the edge”

I’ll follow up with part 2 tomorrow – thanks for making it this far!

Engagement v Numbers

Just before Christmas, Instagram announced that it had hit the milestone of 300 million monthly active users. Not bad for a 4-year-old company and, as many commentators were quick to point out, one that had now moved ahead of Twitter in terms of users (‘languishing’ with ‘only’ 284 million MAUs towards the end of the year).

There was a robust response from Evan Williams, however, who rejected the importance of such numbers. He quickly made one very valid point in response – lining up a photography site as a competitor to a real-time breaking news network is hardly credible. When it comes to their impact on the modern world, the two are entirely different and not really comparable.

However, he then published an essay on his site Medium which sets out a crucial point for anyone who is running some kind of web-based business. As he points out, the web took a wrong turn back in the 1990s when it pursued an advertising-driven model. Even the inventor of the pop-up ad, Ethan Zuckerman, has come out against what has become the norm. The reality is that the usual aim of websites is to do anything that will help them in their fight for eyeballs that can be delivered to advertisers.

On this basis, if you’re looking to maximise the number of visitors, page views would tend to be a key metric. However, building your strategy around a metric that rewards a visitor who simply flicks through a number of pages in succession, without spending any time digesting the information on them, has many flaws.

Williams suggests that instead Total Time Reading (TTR) is a far more accurate indicator of success when it comes to his site Medium. He makes the great point that the most valuable resource in today’s over-crowded digital arena is that of time. Therefore, we should be ignoring the Click Web model and pursuing an Attention Web instead. TTR is not perfect but, as others have pointed out, there’s no God metric when it comes to measuring the effectiveness of online content.
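As a back-of-the-envelope illustration (the numbers and field names below are invented), here’s how page views and Total Time Reading can tell opposite stories about the same two posts:

```python
sessions = [
    {"post": "quick-listicle", "seconds_reading": 5},
    {"post": "quick-listicle", "seconds_reading": 8},
    {"post": "quick-listicle", "seconds_reading": 4},
    {"post": "long-essay", "seconds_reading": 310},
    {"post": "long-essay", "seconds_reading": 245},
]

page_views, ttr_seconds = {}, {}
for s in sessions:
    page_views[s["post"]] = page_views.get(s["post"], 0) + 1
    ttr_seconds[s["post"]] = ttr_seconds.get(s["post"], 0) + s["seconds_reading"]

# The listicle 'wins' on page views, but the essay captured far more attention.
print(page_views)    # {'quick-listicle': 3, 'long-essay': 2}
print(ttr_seconds)   # {'quick-listicle': 17, 'long-essay': 555}
```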

In short, don’t just focus on the hits. Measure the depth of engagement rather than the breadth and ultimately your advertisers – and much more importantly your users – will thank you for it.

A.I. and Summoning The Demon

I’m sorry Dave. I can’t do that.

“If a superior alien civilisation sent us a message saying, “We’ll arrive in a few decades,” would we just reply, “OK, call us when you get here – we’ll leave the lights on”? Probably not – but this is more or less what is happening with AI.” (Stephen Hawking)

Recent months have seen the debate around the future of artificial intelligence start to reach the mainstream press. No longer simply the preserve of sci-fi authors, there now appears to be a more concerted effort being made to publicly co-ordinate research streams and inter-disciplinary expertise to see whether mankind really is, as Elon Musk suggests, “summoning the demon“.

Yesterday an open letter was published by the Future of Life Institute to publicise a pledge by top experts around the globe to coordinate progress in the field of A.I. for the benefit of mankind. It was published in association with a research document which highlights a few areas that researchers should focus on in order to achieve this goal over time. In short, the argument is that work should be directed towards maximising the societal benefit of A.I. instead of focusing simply on increasing the capabilities of A.I. alone.

As the letter says: “Our AI systems must do what we want them to do”.

FLI’s Research Areas

As small improvements are made, the potential monetary value of each step forward in this area could be significant, prompting growing investment into research in turn. But that’s hardly surprising – given the fact that the entirety of the civilisation that we know today is the product of human intelligence, the potential benefits of A.I. (which after all is simply intelligence magnified at scale) could easily be far beyond our current imagination. Research should be directed to ensure that there is significant societal benefit derived from the powers that are harnessed.

When it comes to short-term areas of interest, the FLI suggest the following:-

  • Assess the impact of A.I. on employment and the potential disruption that it might bring.
  • Consider how to deal with the displaced employees who may no longer have a job with the advent of such technology.
  • Develop frameworks for the exploration of legal and ethical questions by:
    • involving the expertise of computer scientists, legal experts, policy experts and ethicists;
    • drafting a set of machine ethics (presumably on a global, as opposed to national, basis);
    • considering the impact of autonomous weapons and what having “meaningful human control” actually represents;
    • assessing the extent to which AI will breach privacy and be able to snoop on our data and general activities.
  • Ensure that all short-term A.I. research focuses on:
    • verification – build confidence that machines will act in certain ways, particularly in safety critical situations;
    • validity – a robot that hoovers up dirt before simply dumping it and repeating may be efficient but is of little benefit to mankind;
    • security – as A.I. becomes more prevalent, it’s increasingly likely that it will be targeted in cyber-attacks;
    • control – determine what level of human control is necessary or simply efficient (e.g. when sharing tasks with machines).

Over the longer-term, the suggestion is that research should look into such issues in light of the potential that A.I. has to evolve such that a system starts to actually learn from its experiences. This introduces the concept of an intelligence explosion – in effect, the way that a system can modify, extend or improve itself, possibly many times in succession. In many ways, it is this idea that represents the demon that Musk, Hawking and others warn us about in such stark terms. As Stanford’s 100 Year Study Of Artificial Intelligence points out:

“We could one day lose control of AI systems via the rise of superintelligences that do not act in accordance with human wishes… such powerful systems would threaten humanity”

Don’t Worry (Yet)

It’s worth noting that there are also plenty of voices who maintain that the singularity is not that near. There is a huge difference between so-called ‘narrow’ AI (intelligence that enables certain specific tasks to be carried out, such as driving autonomous cars), which tends to have fairly short timelines to success, and the much harder ‘wider’ or general AI (machines with intelligence that replicates human intelligence).

As Ben Medlock of SwiftKey points out in a recent article, the field of artificial intelligence is characterised by over-optimism when it comes to timescales because we always underestimate the complexity of both the natural world and the mind. As he points out, to surpass human intelligence, a truly intelligent machine must surely inhabit a body of sorts, just like a human, so that it can experience and interact with the world in meaningful ways from which it can learn. This concept of “embodied cognition” remains a long way off.

On one hand, it’s clear that narrow AI is becoming more common. We’re all seeing the evidence on our smartphones and in technologies that are starting to appear around us. No doubt this will be accelerated by a combination of the internet of things, the final move to the cloud and the evolution of powerful algorithms whose accuracy will naturally improve with the related upsurge in available data being collected. But self-optimising artificial intelligence that evolves at a pace far beyond mankind’s biological constraints remains an issue firmly to be dealt with in the future.

The key thing now, however, is that the discussion has moved beyond academics alone. Such technologies bring vast potential for solving some of the biggest issues that we face, from the eradication of disease to the prevention of global warming, whilst also representing what might very well turn out to be the greatest existential threat mankind has ever faced. There’s no doubt that widening the debate is a good thing.