Enterprises face a number of challenges, both aided and disrupted by technology developments. Looking ahead to 2019, the accelerating technologies to watch include quantum ledgers, artificial intelligence and machine learning.
Gigster CTO Debo Olaosebikan has produced some compelling predictions related to enterprise technology, artificial intelligence, fintech and blockchain. Olaosebikan provides these insights for Digital Journal readers.
Gigster is a smart development service that leverages artificial intelligence to combine top freelance developers and designers into agile technology teams that are far less costly for organizations to maintain. Gigster focuses on helping technology startups gain access to the talent required to scale.
Quantum Ledger Database
The first prediction is that Amazon’s Quantum Ledger Database will gain traction and suffice for most enterprise use cases currently thought to require a blockchain. This offering is a fully managed ledger database with a central trusted authority.
On this, Olaosebikan writes: “Blockchain is one of the most buzzed about new technologies... its deeper potential may lie in its promise to establish trust among multiple parties who don’t necessarily have any reason to trust each other. It does this by being a decentralized, ‘add only’ immutable ledger that is cryptographically secure.”
Blockchain does present challenges, however. As an alternative technology Olaosebikan recommends: “What’s needed is essentially an append-only immutable ledger that is cryptographically secure and owned by the enterprise itself.”
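The core idea Olaosebikan describes, an append-only ledger whose integrity is cryptographically verifiable, can be sketched with a simple hash chain. This is a minimal illustration, not Amazon QLDB's actual API; the class and method names are invented for the example.

```python
# Minimal sketch of an append-only, cryptographically verifiable ledger.
# Each entry's hash covers the previous entry's hash, so tampering with
# any record breaks every later hash in the chain.
import hashlib
import json

class AppendOnlyLedger:
    GENESIS = "0" * 64  # placeholder hash before the first entry

    def __init__(self):
        self._entries = []

    def append(self, record: dict) -> str:
        prev_hash = self._entries[-1]["hash"] if self._entries else self.GENESIS
        payload = json.dumps(record, sort_keys=True)
        entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        self._entries.append({"record": record, "hash": entry_hash})
        return entry_hash

    def verify(self) -> bool:
        # Recompute the whole chain; any modified record fails the check.
        prev_hash = self.GENESIS
        for entry in self._entries:
            payload = json.dumps(entry["record"], sort_keys=True)
            expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
            if entry["hash"] != expected:
                return False
            prev_hash = expected
        return True
```

Note that unlike a blockchain, this ledger has a single owner who appends entries, which is exactly the centrally trusted, enterprise-owned model the prediction favors.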
Rise of machine learning
The second prediction is that IBM, Google, Microsoft, Amazon and other providers of machine learning application programming interfaces will release relatively inclusive datasets. These datasets can be used to combat embedded discrimination and bias in artificial intelligence.
On this, Olaosebikan reports: “Machine learning is the dominant form of artificial intelligence that is driving success in fields as diverse as speech recognition in your Amazon Alexa, facial recognition in the auto-tagging feature on Facebook, pedestrian detection in a self-driving car and even deciding to show you a shoe advert because you visited a shoe e-commerce site.”
There is a major issue underpinning this approach, however: “This seemingly innocuous approach embeds a huge problem of bias. If we blindly feed computers the labels and decisions of humans, the computer may simply replicate our biases. There is the infamous Microsoft Tay bot to remind us of this.”
Tay was an artificial intelligence chatbot originally released by Microsoft Corporation via Twitter in March 2016. The bot proved controversial when it began to post inflammatory and offensive tweets through its Twitter account, forcing Microsoft to shut down the service only 16 hours after its launch.
There are other forms of bias as well, according to Olaosebikan, such as "The bias that comes from the data itself not being representative of the broader group we want to understand. For example, earlier this year, work by Joy Buolamwini and Timnit Gebru showed that, on the task of classifying what gender a person was, major commercially available computer vision products performed best when fed images of light skinned men and worst with images of dark skinned women. It is a massive problem if the datasets we train these classifiers on do not contain enough properly labeled people of color and do not capture broader cultural nuances irrespective of place of origin.”
The Buolamwini and Gebru research demonstrated that machine learning algorithms can discriminate based on classes like race and gender. Their studies showed disparities in the accuracy of classifying darker females, lighter females, darker males, and lighter males in machine learning gender classification systems.
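The kind of disaggregated evaluation behind those findings is straightforward to express: rather than a single overall accuracy, the classifier is scored separately for each demographic subgroup. A minimal sketch, with entirely made-up data:

```python
# Sketch of per-subgroup accuracy, the disaggregated evaluation style used
# to surface disparities between groups. Data and group names are invented
# for illustration only.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, pred in records:
        total[group] += 1
        if truth == pred:
            correct[group] += 1
    # One accuracy figure per group; large gaps between groups signal bias.
    return {g: correct[g] / total[g] for g in total}
```

A large gap between the best- and worst-scoring groups, rather than the headline average, is what revealed the disparities the research describes.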
As to what 2019 has in store, Olaosebikan reflects: “In 2019, we will see large companies with major computer vision products actually release more inclusive data sets openly. These data sets will be more balanced in terms of geography, race, gender, cultural concepts as well as other dimensions.”
Artificial intelligence for healthcare
The third prediction relates to the adoption of artificial intelligence within healthcare and financial services, which will increase as products that make previously black-box artificial intelligence decisions more interpretable become mainstream. What is black box AI? It is the idea that we can understand what goes in and what comes out, but not what goes on inside.
As Olaosebikan comments: “Life was much simpler when artificial intelligence was based on algorithms that made decisions that could be easily explained. For example, an algorithm that looks first to see if you have a headache, and then to see if you have a fever, and then concludes that you have the flu is interpretable. Regardless of whether the algorithm made the right or wrong prediction, there is huge value in the fact that it is possible to explain how it made its decision.”
As to where this will impact, Olaosebikan states: “In fields like medicine, where we might be making life or death decisions with machines, it’s clearly important that we can go back and understand why a machine suggested a course of action.”
There are also implications for finance, such as the use of an artificial intelligence algorithm to determine whether a person is approved or refused for a loan. Such decisions should not be discriminatory.
In terms of what this means in practice, Olaosebikan provides some examples: “Google artificial intelligence can predict whether you are at risk of heart disease simply by looking at your eyes! What exactly is it about your eyes? No one walks around thinking they have diseased eyes! In 2019, as startups and large companies look to drive the adoption of artificial intelligence in industries like finance and healthcare...”
This article originally appeared in Digital Journal