
Is AI Governance a government or corporate responsibility?



Not wanting to take all the fun out of the article headline, but we are going to have to start with a little technical background.


Artificial intelligence is essentially used to answer five questions. They are, in no particular order: “Is it A or B?”, “How much or how many?”, “Is this weird?”, “How is this organized?” and “What should I do next?”.


To put this in context, the “Is it A or B?” question is a simple two-class classification problem that could be applied to hardware failure rates. For example: will this hard drive fail in the next six months? Feed the algorithm years of hard drive usage and failure data, and you could get a pretty accurate answer to your question.
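To make that concrete, here is a minimal sketch of the idea in Python using scikit-learn. The drive telemetry, feature names and labels below are all invented for illustration; a real failure model would be trained on years of actual usage data.

```python
# A hedged sketch: "will this drive fail in the next 6 months?" framed as a
# two-class classifier. The features and data are invented for illustration.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical telemetry: [power-on hours, reallocated sectors, avg temp C]
X = [
    [12000, 0, 34], [45000, 12, 41], [3000, 0, 30], [52000, 30, 44],
    [25000, 2, 37], [60000, 55, 46], [8000, 0, 32], [47000, 18, 43],
]
y = [0, 1, 0, 1, 0, 1, 0, 1]  # 1 = failed within 6 months, 0 = survived

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# "Is it A or B?" for a drive the model has never seen
print(model.predict([[40000, 9, 42]]))        # predicted class: 0 or 1
print(model.predict_proba([[40000, 9, 42]]))  # probability of each class
```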


One of my personal favorites is the “Is this weird?” question, not just because it’s a cool question to ask, but because it is a very powerful one. Anomaly detection algorithms are already a big part of your life. If you have ever been called by your bank to check a certain withdrawal or credit card spend, or been asked out of the blue to verify your credentials by Microsoft, Google or Apple, your account was flagged because an anomaly detection algorithm found something “weird”: an attempted login from a location or device that doesn’t match your normal patterns, for instance.
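A hedged sketch of how such a detector might work: train an isolation forest on a person’s normal login behavior, then ask it whether a new login looks “weird”. The hour-of-day and distance features here are assumptions made for the example, not any bank’s real signals.

```python
# A sketch of "Is this weird?" using an isolation forest on invented data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Normal behavior: logins clustered around the afternoon, close to home
normal_logins = np.column_stack([
    rng.normal(13, 3, 500),   # hour of day
    rng.normal(5, 2, 500),    # km from the usual location
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_logins)

# A 3am login from 8,000 km away should be flagged as an anomaly (-1),
# while a 2pm login from nearby should pass as normal (1).
print(detector.predict([[3, 8000], [14, 4]]))  # e.g. [-1  1]
```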


Regression algorithms answer the “How much or how many?” questions; they are used to predict the weather or make sales forecasts from historical data with certain fixed parameters. Clustering algorithms analyze patterns to answer the “How is this organized?” question and are used to work out how groups of people or things relate to each other, i.e. what types of people enjoy what types of movies.
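Here are toy sketches of both questions, again on made-up numbers: a linear regression answering “how many?” and a k-means clustering answering “how is this organized?”.

```python
# Sketches of the regression and clustering questions, on invented data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

# "How much or how many?" -- fit sales against a hypothetical ad budget
budget = np.array([[10], [20], [30], [40], [50]])  # spend, in $k
sales = np.array([105, 195, 310, 395, 510])        # units sold
forecast = LinearRegression().fit(budget, sales)
print(forecast.predict([[60]]))                    # forecast for a new budget

# "How is this organized?" -- group viewers by taste
# (hours of action films watched vs. hours of romance films watched)
viewing = np.array([[9, 1], [8, 2], [1, 9], [2, 8], [5, 5]])
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit(viewing)
print(groups.labels_)  # which cluster each viewer falls into
```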


“What should I do next?” is heavily used in automation systems, where a series of decisions is made by a reinforcement learning algorithm. The rather sinister name was borrowed from behavioral experiments that studied rats’ and human subjects’ responses to punishment and reward.
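And a toy version of “What should I do next?”: tabular Q-learning on a five-position corridor with a reward at the far end. Everything here (the environment, the learning rates) is invented purely to show the punishment/reward update in action.

```python
# A toy "What should I do next?" sketch: tabular Q-learning on a 1-D
# corridor where the reward sits at the right-hand end.
import random

N_STATES, ACTIONS = 5, [-1, +1]   # positions 0..4, move left or right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration

for _ in range(2000):
    s = 0
    while s != N_STATES - 1:
        # Explore occasionally, otherwise take the best-known action
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == N_STATES - 1 else 0.0  # the "reward" signal
        # Punishment/reward update: nudge the estimate toward the outcome
        best_next = max(q[(s2, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s2

# The learned policy: from every state, "what should I do next?"
print([max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)])
```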


At this point you must be asking what this AI basics lesson has to do with governance and corporate responsibility. In short, a high-level understanding of the subject is absolutely necessary to discern the dangers that may arise from its misuse.


Google has been making headlines recently, and for all the wrong reasons. If the old adage that there is no such thing as bad publicity ever failed to hold true, it is when people’s personal medical data is potentially being misused. It was recently revealed that Google, through its partnership with Ascension, the US’s largest non-profit health organization, has access to over 20 million people’s medical histories. Now, for obvious reasons, Google has promised that this data will only be used to provide better medical support services to Ascension’s clients, but do we believe them, and is there any way to stop them from doing anything else with that data?


Now, applying our basic understanding of AI, let’s have a look at what they could potentially do, and then try to extrapolate that forward into how it could influence our lives. Assume you have millions of medical records covering various ailments, and that you also have those same people’s entire search histories. This is your training data. You can now apply a mixture of regression and clustering to train a model to predict the medical future of people for whom you have no medical data at all.
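To show the shape of that pipeline, and only the shape, here is a deliberately toy sketch on synthetic data: cluster people by hypothetical search-behavior features, then train a model that predicts a condition label for someone with no medical record. None of this resembles anything Google actually does; it only illustrates why combining the two data sets is so potent.

```python
# A toy illustration of the "search history -> medical prediction" idea,
# on synthetic data only. All features and labels are invented.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Hypothetical features: weekly searches for [symptoms, fitness, recipes]
search_profiles = rng.poisson([5, 2, 8], size=(200, 3)).astype(float)
# Synthetic "medical record" label, loosely tied to symptom searches
has_condition = (search_profiles[:, 0] + rng.normal(0, 1, 200) > 6).astype(int)

# Clustering organizes the population; cluster membership becomes a feature
clusters = KMeans(n_clusters=3, n_init=10, random_state=1).fit(search_profiles)
features = np.column_stack([search_profiles, clusters.labels_])

model = LogisticRegression(max_iter=1000).fit(features, has_condition)

# For someone with a search history only, and no medical record at all:
new_person = np.array([[9.0, 1.0, 7.0]])
new_features = np.column_stack([new_person, clusters.predict(new_person)])
print(model.predict_proba(new_features)[0, 1])  # predicted condition risk
```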


One of the key areas where this is applicable is EMD: emergent medical data. These are medical or health inferences made by analyzing people’s behavior and online speech patterns. For example, in a recent analysis of Facebook posts, people who regularly used the words God, Jesus or Lord had a very predictable chance of having diabetes. This doesn’t sound so bad, does it, assuming it is used for helping the human race, but it may just be the ultimate “with great power comes great responsibility” moment.
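A sketch of the EMD idea in miniature, with invented posts and labels: turn word usage into features and correlate them with a health label. This mirrors the shape of such analyses, not any real study’s data or model.

```python
# EMD in miniature: word-usage features correlated with a health label.
# The posts, labels and resulting "model" are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

posts = [
    "thank the Lord for this blessed day",
    "God is good, praying for you all",
    "great run this morning, new personal best",
    "trying a new salad recipe tonight",
]
labels = [1, 1, 0, 0]  # hypothetical condition label, illustration only

vectorizer = CountVectorizer()          # turn posts into word-count features
X = vectorizer.fit_transform(posts)
model = LogisticRegression().fit(X, labels)

# The model now "infers" health status from language alone
print(model.predict(vectorizer.transform(["praise the Lord"])))
```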


I am certain this is cause for concern for many, and it should be; corporations do not have the best track record for doing the right thing. This data could be sold to medical facilities offering targeted services for your future ailment, sold to insurance companies to predict your actuarial risk, or even sold to a potential employer, who could use it to decide whether you will be mentally and physically fit for service in the future!



You can see where I am going with this; we haven’t even touched on the potential for law enforcement and I am already getting Minority Report vibes. Thought crime, anyone? This piece isn’t about that, though, and my conspiracy theorist tendencies are pretty weak. This piece is about governance, and about who holds the key to the mighty engine that we are building. The answer? Well, no one.


A simple Google search will net you plenty of results on different countries’, corporations’ and academic institutions’ takes on AI governance. I read as many of them as I could while researching this article, but the one that stood out for completeness of vision was Singapore’s Model Governance Framework for Artificial Intelligence. Catchy name, right?


The framework is built on the most admirable and obvious overarching principles: “Decisions made by AI should be fair and transparent” and “AI systems should be human-centric”.


Well, that’s obvious, isn’t it? I must ask, though: who decides what is fair, and to whom should it be transparent? Are we talking about fairness to the individual or to the whole, and wouldn’t transparency negate the efficacy of any predictive analysis? Butterfly effect and all. Human-centric is another obvious idea, but to what extent? If AI is being used to profile potential terrorists and governments are empowered to act on this profiling, what happens to the false positives? Will their treatment be humane?


I don’t think anyone will dispute the power of AI for both good and bad. Every technological leap in the past has gone through similar teething pains, and the problem isn’t with the technology. The problem is with human beings. In recent conversations, I have been comparing guns and AI in a rather awkward simile. AI, like the gun, is just a tool: a tool that can be used to solve some of the world’s greatest problems. Data can be directly compared to bullets. Now the AI gun is loaded. Who is holding it, and who, if anyone, are they pointing it at?


These are big questions, and questions we are going to need to answer very quickly. Progress is being made at an exponential rate, and with quantum computing on the cards, that is only going to accelerate.


I would like to call on individuals to step up and get involved in this conversation, independent of governments and corporations. The internet and everything in it is global, yet we are expecting governments with national interests and corporations with bottom-line interests to use this technology in a fair and transparent fashion. I think we can all agree that this is completely the opposite of what is happening today. We have a responsibility to keep asking the architects of the 4th industrial revolution to do better and, more importantly, to keep us in the loop.


Give your opinion and add your voice: this is our future, and no computer should be deciding it for us.





