SXSW 2018 – Visions of the future: Blockchain, AI and Colonies on Mars
Venture Insights attended this year’s South by Southwest (SXSW), Austin’s annual technology conference.
This report highlights seven key trends from the event, including: the impact of blockchain, whether we should trust AI, how ethics apply to platforms, the battle for your voice, and whether we will be on Mars by 2019!
Blockchain is driving us “towards a decentralised world wide web”; Strip out the hype and a wide range of compelling use cases remain in industries such as finance, health and media.
The time to debate the ethics of AI is now; This was the consensus of almost everyone in Austin, whether they saw AI as an existential threat or the path to human fulfilment.
Fake news is catching up with the tech platforms; Some form of regulation is now more likely than not, with markets such as Germany already moving forward.
Autonomous Vehicles will be here “next year”; So says Elon Musk. He is often over-optimistic, but even the conservative Daimler AG (Mercedes-Benz) expects to be selling AVs in “3 to 4 years”.
The connected home is becoming the battle for your voice; Voice assistants are becoming the glue that links you to your music, your television, your lights, your security, even your heating.
After years of hype, is Augmented Reality about to go mainstream? An audio-based ecosystem is developing, with ‘hearables’ offering an exciting new user interface.
Finally, Mars is getting a lot closer; Elon Musk is talking about the potential of regular trips to Mars by 2019, although the first colonisation is still pegged at 2030 or beyond.
Key Trends from SXSW 2018
Venture Insights attended the annual Austin-based technology conference, South by Southwest (SXSW), which took place over the first half of March. This report outlines seven of the key trends we observed during the event:
1) Blockchain
Discussions of ‘the Blockchain’ were everywhere in Austin this year.
Cryptocurrencies have been at the forefront of global news since the last SXSW, with heavily fluctuating prices a driving force behind this. However, there are beneficial uses for blockchain outside of finance, and these are starting to become more apparent. The adoption of blockchain can be seen across smart contracts (music and healthcare), identity (passports, personal ID), the Internet of Things (data marketplaces), and digital rights management (music and film).
A popular panel at SXSW covered “Why Ethereum is Going to Change the World”, with Ethereum co-founder Joseph Lubin leading the discussion. Lubin’s blockchain software company, ConsenSys, builds on the Ethereum blockchain, which can act as a decentralised governance tool across various industries.
Music and health are two industries which could benefit today, and Lubin highlighted ConsenSys’ current partnership with the music platform Ujo Music. Ujo’s platform encourages artists to register as individuals and upload their content to the network, allowing upload agreements and even usage rights to be managed directly by the artist online. The artist benefits by gaining more control over revenue per click, which has not been the case in the past. Lubin recognised this as an issue: “Intermediaries in the music industry usually extract 70-80% of value flow in the industry and can even delay payments for artists. Our platform allows consumers to support artists instantly and ensure that artists get paid immediately for their work”.
This is one example of how Lubin and ConsenSys see the nature of business shifting towards a decentralised model, which he referred to as the “decentralized world wide web”, or “web 3.0”.
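The economics Lubin describes can be illustrated with a toy calculation. The percentages and fee below are illustrative assumptions (Lubin cites a 70-80% intermediary cut; the network fee is invented), not a description of Ujo’s actual contract:

```python
# Toy comparison of intermediated royalty flows versus the direct,
# smart-contract-style payments described above. All figures are
# illustrative assumptions, not Ujo Music's actual model.

def intermediated_payout(gross: float, intermediary_cut: float = 0.75) -> float:
    """Artist's share after intermediaries extract their cut
    (assumed 75%, in the middle of the 70-80% range Lubin cites)."""
    return gross * (1 - intermediary_cut)

def direct_payout(gross: float, network_fee: float = 0.02) -> float:
    """Artist's share when a consumer pays via a smart contract,
    assuming only a small (hypothetical) network/transaction fee."""
    return gross * (1 - network_fee)

gross = 100.0  # hypothetical revenue from one track
print(f"Via intermediaries: ${intermediated_payout(gross):.2f}")
print(f"Direct via contract: ${direct_payout(gross):.2f}")
```

Under these assumptions the artist keeps roughly four times as much revenue, which is the gap the decentralised model claims to close.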
Figure 1. Joseph Lubin, co-founder of ConsenSys, at SXSW
Venture Insights’ take
Blockchain-based tech is not, as some contend, a flash in the pan. It has the potential to be highly disruptive across many industries. We see real-world use cases across fintech, media and telecoms, to name a few, and over the coming months we will publish a series of blockchain primers. We even see potential applications in democracy itself. However, the idea that decentralised applications and systems will replace all current centralised ecosystems is far-fetched, and many of the decentralised applications and use cases being proposed today are no better than the structures they are supposed to replace. In summary, this is a critical area that all industry players should be watching and learning from, but it remains too early to forecast just how most industries will be impacted.
2) Artificial Intelligence
Once again, Artificial Intelligence (AI) was a huge influence on SXSW in 2018. The key theme was the implications of AI becoming integrated with human society. For example, as AI is built into applications such as self-driving cars and medical devices, society will begin to rely on AI to make life-or-death decisions. Various panels discussed how best to adopt and interact with AI, its growing impact and the risks it might pose.
Bias in algorithms
A panel titled "Algorithms, Unconscious Bias, and AI" examined how ‘creator bias’ can enter AI algorithms during the training phase. The main concern was the speed and scale of the introduction of AI into everyday life before we fully understand the problem of biases in training data and how to solve them. Heavy reliance on data and the potential of AI misinterpreting certain extreme human characteristics is a problem for many people who don’t match the algorithm’s definition of ‘normal’. This topic has been covered previously at a Google conference on the relationship between humans and AI. Google’s senior vice president of engineering, John Giannandrea, indicated “it’s important that we be transparent about the training data we are using, and continue to look for hidden biases in it, otherwise we are building biased systems.”
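The panel’s concern about hidden bias can be made concrete with a deliberately simple sketch. The “model” below just predicts the majority label it saw in training, and the dataset is invented, but it shows how a headline accuracy figure can mask total failure on an underrepresented group:

```python
# A minimal sketch of how skewed training data hides bias behind a
# headline accuracy number. The classifier and dataset are invented
# for illustration only.
from collections import Counter

def train_majority_classifier(labels):
    """Return the single most common label in the training data."""
    return Counter(labels).most_common(1)[0][0]

def accuracy(predictions, truths):
    return sum(p == t for p, t in zip(predictions, truths)) / len(truths)

# 95 examples of group A, only 5 of group B: to the model,
# group B is outside its definition of 'normal'.
train_labels = ["A"] * 95 + ["B"] * 5
model_label = train_majority_classifier(train_labels)

test_truths = ["A"] * 95 + ["B"] * 5
preds = [model_label] * len(test_truths)

overall = accuracy(preds, test_truths)            # looks excellent
group_b = accuracy(preds[95:], test_truths[95:])  # total failure
```

The model scores 95% overall yet 0% for group B, which is why speakers urged auditing training data rather than trusting aggregate metrics.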
Exploring AI Innovations
The difficulty of teaching AI was described in a panel led by Adam Cheyer (founder of Siri), Daphne Koller (Stanford professor and co-founder of Coursera) and Nell Watson (Singularity University), who noted how today’s AI algorithms require millions of cat pictures to correctly and consistently identify a cat, while an infant can learn to identify a cat with only 0.001% of that input, highlighting how far AI must go before it can think like a human.
Elon Musk took a different tack in his Q&A session. His position is well known, but at SXSW he restated his concerns around the future of AI, “I am really quite close, I am very close, to the cutting edge in AI and it scares the hell out of me," said Musk. "It's capable of vastly more than almost anyone knows and the rate of improvement is exponential… mark my words — A.I. is far more dangerous than nukes.”
AI’s Impact on the workplace
One of the most popular AI topics was the impact AI is having on the workplace now and what it could do in the future. This was debated by Charlie Muirhead, CEO of CognitionX, Kate Sheerin, a Senior Policy Analyst at Google, and Tom Ward, VP of Digital Operations at Walmart. From Walmart’s perspective, Ward said, AI is and will be implemented to cut out mundane tasks and increase productivity, benefiting the workforce. He explained that AI is currently used for very specific use cases, where it is designed to solve daily issues, such as ensuring products are positioned correctly in stores and priced accurately.
AI will feed a growing planet
George Kantor, a Senior Systems Scientist at Carnegie Mellon University, explained how robotics, sensing and machine learning can all be used to tackle the problem of rapid global population growth: by 2040, humanity will not be able to survive on current levels of food production. He indicated that high-tech farming techniques will not work in all countries; instead, AI could help by producing better seeds and speeding up farming processes, with robots accelerating the field-trial period.
Yamaha’s and Hakuhodo’s AI powered piano
Alongside panels focused on a greater understanding of the implications and future uses of AI, there were also tangible demonstrations. Japan’s Hakuhodo i-studio has partnered with Yamaha to design an AI-powered piano capable of duetting with a live piano player. The live interactive demonstrations involved a piano running Yamaha’s AI tech and an accompanying screen that added a visual effect to the experience. The AI-powered piano analyses the human player’s hand movements and the sounds produced to predict the next note the individual will play.
Figure 2. Yamaha and Hakuhodo i-studio’s AI-powered piano
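Yamaha has not published how its piano predicts the next note. Purely as an illustration, one classic way to frame the problem is a first-order Markov model over note transitions; the melody below is invented:

```python
# An illustrative sketch of next-note prediction via a first-order
# Markov model: count which note tends to follow which, then predict
# the most frequent follower. Not Yamaha's actual method.
from collections import defaultdict, Counter

def train_transitions(note_sequence):
    """Count how often each note follows each other note."""
    transitions = defaultdict(Counter)
    for current, nxt in zip(note_sequence, note_sequence[1:]):
        transitions[current][nxt] += 1
    return transitions

def predict_next(transitions, current_note):
    """Return the most likely next note, or None if the note is unseen."""
    followers = transitions.get(current_note)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

melody = ["C", "E", "G", "C", "E", "G", "C", "D", "E"]
model = train_transitions(melody)
# After "C" the model saw "E" twice and "D" once, so it predicts "E".
```

A real system would work on richer features (timing, dynamics, hand position, as the demonstration suggests), but the predict-from-history structure is the same.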
Another example was Nextremer’s AI Powered Robot, Gochan.
Nextremer, backed by a JPY470mn (US$4.3mn) joint investment from Bank of Kochi and Innovation Network Corporation of Japan (INCJ), introduced its AI-powered Gochan mascot which has been designed to replicate a 3D interactive version of TV Asahi’s (a Japanese Network) popular logo. Utilising Nextremer’s dialog engine powered by AI and natural language processing, Gochan was able to answer questions and even recognise and interact with different voices and accents. This sort of interactive robotic assistant is believed to be a realistic use case in airports, train stations and retail stores, by welcoming guests and answering basic questions.
Figure 3. Nextremer’s AI Powered Robot, Gochan
Venture Insights’ take
Artificial intelligence is rapidly moving into the mainstream in terms of the number of human activities that it will impact. Alongside the macro-concerns, such as Elon Musk’s around where this all ends up, self-learning machines whose decision processes we do not (in fact cannot) understand raise immediate ethical questions. In his recent book, Homo Deus, Yuval Noah Harari describes this process as the ‘decoupling’ of intelligence and consciousness. The conference highlighted the extraordinary pace of change and many speakers encouraged social scientists, psychologists and philosophers to immerse themselves in AI in order to round out the debate on where we take these technologies.
Since SXSW, the field has jumped forward again, with Google demonstrating its freakishly accurate AI audio chatbot. Google calls the technology “Google Duplex”. It is the culmination of all its investments in deep learning (DL), natural language processing (NLP) and text-to-speech. Google intends for Duplex to become part of its Google Assistant.
In the demonstration, a user asks their Google Assistant to book a haircut appointment. The Duplex chatbot demonstrates sophisticated real-time speech recognition, listening to and answering questions from the salon staff member. For example, whenever the Duplex chatbot does not understand a question, it repeats the original question back to the staff member in a rhetorical form, prompting the staff member to phrase it differently and adding to the impression that a conversation is being had. It was later revealed that the phrases spoken by Duplex were pre-recorded and that it is confined to defined conversational use cases for now. However, the most remarkable aspect of the demonstration was the AI’s ability to recreate the subtle nuances of a human voice, distancing itself from sounding robotic.
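The fallback behaviour described above, echoing an unrecognised question back as a prompt to rephrase, can be sketched in a few lines. The intents and canned answers below are invented; Google has not published Duplex’s implementation, and the demonstration itself used pre-recorded phrases:

```python
# A toy sketch of a keyword-matching dialogue handler with the
# "repeat the question back" fallback described above. The intents
# and responses are invented; this is not Google's implementation.

KNOWN_INTENTS = {
    "what time": "12pm on Tuesday, please.",
    "which service": "Just a haircut, thanks.",
}

def respond(question: str) -> str:
    q = question.lower()
    for keyword, answer in KNOWN_INTENTS.items():
        if keyword in q:
            return answer
    # Fallback: echo the question back rhetorically, inviting
    # the other speaker to phrase it differently.
    return f"Sorry, {question.rstrip('?')}?"

print(respond("What time would you like to come in?"))
print(respond("Do you have a colour preference at all?"))
```

Even this trivial structure shows why the fallback is effective: the bot never has to admit it failed, it simply keeps the conversation moving.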
The ethical issues surrounding AI were covered heavily throughout SXSW. During Google’s demonstration, the salon staff member failed to recognise she was conversing with an AI. This scenario raises issues around whether people should be alerted to the fact that they are speaking with a machine. See the Venture Insights report “Artificial Intelligence – The next disruptive revolution” for a more in-depth consideration of the way in which AI will impact our economy.
3) Fake news + bad actors
Fake news was very topical at this year’s SXSW, with technology highlighted as a critical factor behind misinformation being delivered and targeted to almost anyone in the world.
Sadiq Khan, the Mayor of London, was firm in his assessment that social media platforms are not doing enough to rid their websites of “hate speech, harassment and propaganda”. YouTube responded by showcasing a new feature it will roll out to fight fake news, called ‘information cues’. YouTube indicated it would incorporate Wikipedia information into the user interface for videos which are misleading or conspiracy-related, in an attempt to give the viewer both sides of the story.
Facebook’s Lead Policy Manager of Counterterrorism, Brian Fishman, highlighted the company’s intention to tackle terrorism, harassment, trolls and bots with AI and human sleuths. The main takeaway was that the algorithms in place cannot tackle the issue on their own, with the human eye vital to catching well-worded messages with a non-obvious yet harmful meaning.
Figure 4. YouTube – Information Cues
Venture Insights’ take
Interest in how information is targeted at individuals, the ways in which news is sensationalised and the benefits that can be gained from this has entered the public consciousness across the globe. Governments are nearing the end of their investigations and we expect regulations to be placed on the digital giants. See the Venture Insights report “The Influence of Bots on Fake News” for more information.
4) Autonomous Vehicles
Autonomous vehicles (AVs) did not feature as prominently as last year, feeling almost like yesterday’s news, but they remain highly relevant. The broad view is that they will reach the mainstream much sooner than many expect, perhaps as early as 2019.
Wilko Stark, VP of Strategy for Daimler AG (parent company of Mercedes-Benz), had an important message for SXSW: the company will be selling autonomous vehicles in three to four years, with the aim of providing a rental service to the end user rather than selling vehicles outright.
German company Moovel designed a simulation which gave users the opportunity to visualise how an AV’s sensors identify and determine what information must be interpreted before making any manoeuvre. To experience this, the individual was tasked with driving a slow-moving buggy whilst wearing a VR headset that displays only digital contour lines (generated by the sensors located on the buggy itself). The principle is that the digital lines represent the physical objects in the way, which the driver then attempts to avoid.
Figure 5. Moovel’s AV simulation through VR and Waymo’s AV
SOURCE: Govtech, Business Insider
Alphabet-owned Waymo showcased its driverless cars at SXSW this year, with a short video of its cars doing test runs without anyone behind the wheel. As described by CEO John Krafcik, this is a mission the company has been pursuing for the last decade. Waymo’s mission is to provide a driverless vehicle for all types of journeys, as seen in its nearly approved partnership with Honda to design delivery vehicles from scratch. This is in addition to its work with Chrysler and Jaguar Land Rover to design driverless people shuttles.
Venture Insights’ take
Autonomous driving is within touching distance. Discussions of future ownership models seem to be moving towards corporate pooling models as opposed to a consumer owned model. Fixed routes are the most viable use case for first implementation, which could rival public transport offerings in cities. See Venture Insights report “Get Your Hands Off It: How a driverless future impacts Media”, for more discussion of the evolving AV market and its potential impact on the media sector.
5) Connected home
Connected home innovation continues at a rapid pace. There was much discussion around the mainstream arrival of digital voice assistants. Google Assistant and Amazon’s Alexa were prominent, but many other companies were showing off their developments.
Yonomi’s connected home offering looks to connect everyday home functions, appliances and devices through its app. Doors, lights, speakers, cars, voice assistants and more are all managed via IFTTT (a web service for designing conditional ‘if this, then that’ statements) and operated through the Yonomi app, which the Austin-based company is targeting not only at homes but at office spaces too. Commands such as setting light brightness to 50% for a certain time, or setting the home to a desired temperature, can all be controlled through the app.
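The conditional ‘if this, then that’ rules described above have a simple general shape: a trigger condition evaluated against the home’s state, and an action fired when it holds. The sketch below illustrates that pattern only; the device names and thresholds are invented, not Yonomi’s or IFTTT’s actual API:

```python
# A minimal sketch of IFTTT-style conditional rules for a connected
# home: "if <trigger condition>, then <action>". Device names and
# thresholds are invented for illustration.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    condition: Callable[[dict], bool]  # evaluated against home state
    action: Callable[[dict], None]     # mutates home state when fired

def run_rules(state: dict, rules: list) -> dict:
    """Fire every rule whose trigger condition currently holds."""
    for rule in rules:
        if rule.condition(state):
            rule.action(state)
    return state

rules = [
    # If it is 7pm or later, dim the living-room lights to 50%.
    Rule(condition=lambda s: s["hour"] >= 19,
         action=lambda s: s.update(light_brightness=50)),
    # If the house is colder than 18C, turn the heating on.
    Rule(condition=lambda s: s["temperature_c"] < 18,
         action=lambda s: s.update(heating_on=True)),
]

state = {"hour": 20, "temperature_c": 16,
         "light_brightness": 100, "heating_on": False}
run_rules(state, rules)
```

The appeal of platforms like Yonomi is that each device vendor only has to expose triggers and actions once, and users compose them freely.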
Google equipped a house close to the SXSW venue to demonstrate how its voice assistant can be used to control lights, blinds, appliances and even rubbish bins. The Google Home house included 12 different rooms, each showcasing a specific activation command for a normal household task, all of which could be completed remotely. Connected activities were also set up outside the house, with sprinkler systems that could be turned on and lawn patterns that could be altered. Mood was another factor Google played with, suggesting that when you are hungover, the Google Home could help with the “pain” by lowering blinds and controlling machines to complete routine everyday tasks, like the laundry.
Kasita is looking to open up a new market within the housing industry by offering small, affordable housing units that are fully equipped from a connectivity perspective. Its house unit measures 352 square feet and connects to Amazon’s Alexa to offer connected home features. The Kasita offering looks to address an issue currently faced by home IoT, where there is no one-stop shop connecting all applications in one app, whilst tackling the lack of affordable homes at the same time. Kasita’s Head of Technology noted: “It’s clear at this year’s show that although the number of devices and technologies continues to increase and get more powerful, they are moving further and further from being able to talk to each other and work together. Consumers must work very hard or depend on professionals to make it all work together. Not ideal.”
Figure 6. Yonomi’s, Google’s and Kasita’s connected home offerings
Venture Insights’ take
It is easy to see the value in a connected household. However, on the path to a true IoT world, mass adoption is being held back by a lack of compatibility across devices, appliances and everyday objects. To date, there are very few one-stop-shop options for connected homes, which puts the strain on the homeowner to connect everything. The most efficient way to achieve this may be to start from the bottom, when a house is built or, in Kasita’s case, manufactured.
6) Virtual Reality + Augmented Reality
Virtual Reality and Augmented Reality played a central role at SXSW, as they have in previous years. This time round, however, futuristic but relevant use cases outside of gaming and non-live entertainment were demonstrated. With voice assistants on phones and in the home, wearable audio tech (‘hearables’) could provide the next user interface, with users no longer isolated from the world by headphones, as seen in Bose’s AR wearable and Panasonic’s Wear Space.
VR for retail – goPuff VR
One use case for VR in retail was goPuff’s demonstration at SXSW. The online delivery merchant has created a virtual environment which the user navigates by moving their head to select items, examine pricing and add products to their cart. Although goPuff’s offering is a fun interactive example for now, future scenarios where online grocery shopping is completed in this manner appear viable. Inventory control has been an issue for this use case so far, with discrepancies between virtual and real-world stock levels; unlike other offerings, goPuff controls all the products and knows its own inventory, preventing this from happening.
Figure 7. goPuff’s Retail VR
SOURCE: Tom’s Guide
AR Audio for navigation / maps - Bose
One of the talking points of SXSW, a conference not known for product releases, was Bose’s AR sunglasses. The audio firm has established a $50 million fund for Bose AR development and has set up 11 software partnerships, including Yelp, TripAdvisor and Strava (a fitness tracking app). The Bose AR glasses combine GPS data (via a Bluetooth connection to your phone) with head movements picked up by motion detectors in the glasses’ frame, which allows the device to know which direction the user is looking in. Cross-referencing geolocation with head direction means the device knows which landmarks the user is facing. Speakers located in the temples of the glasses pipe audio towards the user’s ears whilst preventing noise leakage. The main benefit is that app developers can tag locations to trigger specific audio cues, allowing an interactive audio version of maps to be created. The motion detection can also serve as a head-based gesture control interface, allowing users, for example, to skip tracks whilst listening to music. The glasses also integrate with Amazon’s Alexa through audio interactions.
Figure 8. Bose AR Sunglasses
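Bose has not disclosed how its device matches head direction to landmarks, but the underlying geometry is standard navigation maths: compute the compass bearing from the user’s position to each tagged landmark, then compare it with the head heading. The landmarks, coordinates and tolerance below are invented for illustration:

```python
# A sketch of cross-referencing geolocation with head direction to
# identify which landmark a user is facing. The bearing formula is
# standard great-circle navigation; landmarks and tolerance are
# invented, and this is not Bose's actual implementation.
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing in degrees from point 1 to point 2."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(x, y)) % 360

def facing_landmark(user_pos, heading_deg, landmarks, tolerance=15):
    """Return the landmark whose bearing lies within `tolerance`
    degrees of the user's head direction, or None."""
    for name, (lat, lon) in landmarks.items():
        b = bearing_deg(user_pos[0], user_pos[1], lat, lon)
        # Smallest angular difference, handling the 360/0 wrap-around.
        diff = abs((b - heading_deg + 180) % 360 - 180)
        if diff <= tolerance:
            return name
    return None

# Hypothetical scene: a user in downtown Austin.
landmarks = {"Texas Capitol": (30.2747, -97.7404),
             "Lady Bird Lake": (30.2565, -97.7445)}
user = (30.2672, -97.7431)
```

Once the facing landmark is known, an app can trigger the audio cue tagged to it, which is the "audio version of maps" described above.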
VR + AR for sporting events
VR and AR for sporting events was another theme at the event. With both technologies having been unsociable forms of entertainment to date, start-ups such as LiveLike set out to demonstrate through their social VR offering that VR can enhance viewing whilst also making it social. The firm has already completed two funding rounds, the latest raising US$9.6mn. Its product is a live-streaming VR platform which incorporates AR to generate a virtual suite looking out over a sporting event. Options include selecting different camera angles and accessing stats, highlights and replays, all whilst connecting with friends and other fans over Facebook. This all happens whilst a broadcast game is taking place, and the event can be viewed with or without a VR/AR headset. Partnerships with broadcasters and rights holders are key to the model, with UEFA, NCAA Basketball, Fox Sports and the French Open all having collaborated with LiveLike at some point.
Representatives from the NFL, NHL and NASCAR all had their say on live sports being displayed through VR/AR whilst at the stadium. The belief is that there will be certain moments throughout a live event when putting on a VR/AR headset will enhance the viewing, e.g. during an NHL match, a supporter could wear the headset to become the goalie and observe a penalty shot first hand.
Figure 9. LiveLike – Social VR
Sony - Sonic Surf VR
Sonic Surf VR is a chamber designed to give the impression that sound is moving around the individual. It not only allows the user to move around the chamber and still hear the same clarity, but also creates an immersive experience that can be enjoyed by groups of people without headphones. This spatial audio experience is achieved with 576 speakers installed in the chamber, all manipulated by Sony’s sound field synthesis technology. It culminated in the “Odyssey” demonstration, giving users a VR audio-enhanced experience. Sony representatives indicated Sonic Surf VR would be available for integrators by June 2018.
Figure 10. Sony – Sonic Surf VR
SOURCE: AudioTechnology, Engadget
Venture Insights’ take
Whilst both VR and AR have been hot topics for a while, both have struggled to provide compelling use cases outside of entertainment and gaming. However, voice interfaces offer a plausible route for the technology to push into everyday activities, and we expect the next generation of AR/VR-enabled glasses to play a leading role. Venture Insights believes a new audio-based ecosystem is starting to form. Audio will allow everyday activities that have previously been conducted on smartphones to transition to voice input plus audio output. See the Venture Insights report “Hey Siri – What is the potential of Digital Voice Assistants in Media” for more discussion of the future role of voice.
7) Mars
Finally, everyone in Austin this year was talking about Mars and, more broadly, the next wave of privately funded space research.
A panel including Dr Deborah Barnhart, former NASA astronaut Don Thomas, microbiologist Monsi Roman and screenwriter Mickey Fisher discussed what travelling to and colonising Mars would look like. Dr Barnhart indicated that getting to Mars will take a massive effort from commercial entities and governments, including unified participation across countries. All four participants agreed on the earliest date for beginning to colonise Mars, which they believe is likely to be around 2030.
Elon Musk’s SpaceX is expected to launch a mission to Mars by early 2019, as described by the CEO himself. He believes short trips to and from Mars may be possible next year, as the space exploration firm is currently building an interplanetary spaceship. The mission will be carried out by SpaceX’s BFR rocket, which is predicted to cost US$7mn per launch. Musk made it clear that Mars will not be some kind of escape option for the wealthy: “For the people who go to Mars, it’ll be far more dangerous. It kind of reads like Shackleton’s ad for Antarctic explorers. ‘Difficult, dangerous, good chance you’ll die. Excitement for those who survive.’”
Figure 11. Elon Musk at SXSW
Venture Insights’ take
In our view, privately funded space exploration is one of the most exciting mega-trends today. We are getting close to the point where the private sector is achieving more than the publicly funded space agencies, which would have been inconceivable until quite recently. This opens up the prospect not just of inter-planetary travel and colonisation but also the permanent occupation of the Moon, space based intra-planetary travel (Sydney to London in 90 minutes), asteroid mining and privately funded space stations.
In short, for anyone who enjoys thinking about the future of humanity, these are very exciting times.