Artificial intelligence (AI) is an area of computer science that emphasizes the creation of intelligent machines that work and react like humans. Some of the activities computers with artificial intelligence are designed for include speech recognition and problem solving through ‘learning’.
Machine learning is a core part of AI. Broadly, there are two types: supervised machine learning, where the model learns from labelled examples (classification and numerical regression), and unsupervised machine learning, where no labels are available. Classification determines the category an object belongs to, while regression discovers a function that maps the respective inputs to suitable numerical outputs.
Unsupervised machine learning is learning without labelled examples: insights and patterns are identified in streams of inputs using statistical techniques such as clustering and factor analysis. Supervised machine learning is learning under supervision: the model is trained on historical data records in which the target variable of interest is already known.
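The distinction can be sketched in a few lines of plain Python. This is only a toy illustration with made-up numbers: a least-squares regression fit on labelled pairs (supervised), and a simple one-dimensional two-means clustering on unlabelled points (unsupervised).

```python
# Toy illustration of the supervised/unsupervised split (hypothetical data).

# Supervised regression: fit y = a*x + b by least squares on labelled pairs.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]          # known target values: the "supervision"
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x
print(f"learned slope={a:.2f}, intercept={b:.2f}")   # roughly y = 2x

# Unsupervised clustering: no labels; group points with a simple 1-D 2-means.
points = [1.0, 1.2, 0.9, 8.0, 8.3, 7.9]
c1, c2 = min(points), max(points)        # initial centroid guesses
for _ in range(10):
    g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
    g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
    c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)
print(f"cluster centres: {c1:.2f}, {c2:.2f}")        # two groups emerge
```

In the first case the algorithm is told the right answers and learns a mapping; in the second it is only given the raw points and has to discover the structure itself.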
AI is progressing rapidly, from self-driving cars to Siri. While AI is used mainly to help businesses become more efficient and more digital, there is evidence that AI can also be harmful. As we hear almost daily, more and more companies' data are being hacked by people who are misusing AI.
There is now a growing interest in AI safety and AI ethics as:
- AI has the potential to become more intelligent than any human
- AI algorithms can compute much faster than the human brain, and they are capable of multi-task and multi-modal learning. AI is becoming more and more a part of our daily lives. For example, we have apps that remind us to go for our run; while running, the app tells us to slow down because we are running too fast, or to speed up because we are running a bit too slow. We are becoming increasingly reliant on AI algorithms to tell us what to do.
- Another example: the other day, I was looking for a particular data science book to read, and while searching for it, two other books were recommended to me. I bought not only the book I set out to buy, but also the two recommended books. And guess what, the two recommended books were much more interesting than the book I set out to buy. My message is that AI algorithms are getting so intelligent that their recommendations are very accurate, and we are trusting the algorithms more and more, which is a good thing. But what happens when an AI algorithm starts learning new information much faster than you expect, and starts making recommendations and doing new things that are not in your control, or harmful to you or your business?
- On the not-so-good side, AI algorithms are sometimes developed by analysts who are not properly trained in the field. The analyst learns how to code from a book or online course and then obtains a job as a “Data Scientist”, but does not quite understand whether the models they build are good or not, or whether the models, when run on the company's data, produce an optimal result. As long as familiar output is seen and the overall accuracy of the model is 99.1%, the analyst thinks he has produced a model that the company would be happy to use. No testing is done, no validation is done; the company trusts the analyst and, by simply copying and pasting the code into its own tasks, deploys the model into the market because the reported overall accuracy was 99.1%. It is usually discovered later (when the company is losing money) that the data was unbalanced and the accuracy on the cases that actually matter (“sensitivity/recall”) was 25%.
- As there is a large shortage of trained “Data Scientists”, many businesses are taking whatever talent they can find, usually very junior and inexperienced. It is scary that AI algorithms are being used by people who do not know or understand how the “black-box” neural network works. Many analysts do not analyze the impact of the AI algorithms on the business. For AI to work well, what the algorithm does, how it does it, why it does it that way, what the decision-making strategies are, what the impact on the business is, and how business operations will change all have to be understood before the AI algorithm is deployed into the market.
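The 99.1%-accuracy trap described above is easy to reproduce. The confusion counts below are hypothetical, chosen to match the figures in the story: on a test set where only 12 of 1,000 cases are positive, a model that misses most positives can still report an impressive overall accuracy.

```python
# Hypothetical imbalanced test set: 988 negatives, 12 positives.
# A model that predicts "negative" for almost everything still looks great
# on overall accuracy, while missing most of the cases that matter.
tn, fp = 988, 0    # all negatives classified correctly
tp, fn = 3, 9      # only 3 of the 12 positives caught

accuracy = (tp + tn) / (tp + tn + fp + fn)
recall = tp / (tp + fn)              # a.k.a. sensitivity

print(f"accuracy: {accuracy:.1%}")   # 99.1%
print(f"recall:   {recall:.1%}")     # 25.0%
```

This is why a single headline accuracy number on unbalanced data tells you almost nothing: the recall on the minority class is the figure the business actually pays for.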
Yes, we are living in an exciting time, where we can obtain answers to our questions very quickly, through the use of AI. But, I think we are not asking ourselves enough questions. Questions such as:
- Do we have a framework, or set of standards, to check whether an AI algorithm is acceptable or not?
- Do we have a framework, or set of standard competency skills in AI, to examine and certify analysts as “Qualified Chartered Data Scientists” based on their work experience and education?
- AI can be harmful. Who will be responsible for the harm? The analyst who wrote the software code, the owner of the company who employed him to write it, or the person who bought or licensed the software? We have no idea how bad AI programs can become, or when they will be dangerous and out of control. It could happen today, next week or next year.
I am sure there are lots of AI experts discussing and working on the above questions and many more questions on the ethics of AI. Our lives are no longer private. AI is clearly part of our daily lives. We search for something online and within a few seconds we obtain email advertisements or social media advertisements based on our search. In some cases our personal data is sold for purposes other than the purpose it was provided for. How do we control the use and misuse of our own data?
I can go on and on… but really, it is time for us to take charge. We are living in a world that is being disrupted by new processes and new ways of doing things, quickly and efficiently. But how efficient are we? Doing things quickly may be good, but how accurate, and how profitable, are your AI algorithms?
Most AI algorithms are probably effective, particularly in the beginning. But extra care needs to be taken to monitor, modify, and in some cases retire your AI algorithms once they are no longer as effective as they were a few months ago.
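A minimal sketch of what such monitoring could look like, assuming you log each prediction against the eventual outcome (the baseline, tolerance and window size here are all hypothetical choices):

```python
from collections import deque

# Minimal model-monitoring sketch (hypothetical thresholds): keep a rolling
# window of recent outcomes and flag the model when its accuracy degrades.
BASELINE = 0.95          # accuracy measured at deployment time
TOLERANCE = 0.05         # allowed drop before raising an alert
WINDOW = 100             # number of recent predictions to track

recent = deque(maxlen=WINDOW)

def record(prediction, actual):
    """Log one prediction/outcome pair; return True if the model has drifted."""
    recent.append(prediction == actual)
    if len(recent) < WINDOW:
        return False                         # not enough data yet
    rolling_acc = sum(recent) / len(recent)
    return rolling_acc < BASELINE - TOLERANCE

# Simulate a model that has degraded to 85% accuracy: the alert fires.
alert = False
for i in range(100):
    alert = record(1, 1 if i < 85 else 0)
print("drift alert:", alert)   # True
```

The point is not this particular threshold rule but the habit: a deployed model needs an ongoing accuracy check against live outcomes, not just a one-off score at launch.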
In a nutshell, AI is, for the most part, good and efficient, and aids businesses with their processes and decision-making, leading to more profitable and efficient businesses. I am more concerned about the cases where AI algorithms are used for the wrong or bad purposes, and about the fact that there are not enough rules in place to guide analysts and businesses on the ethics and safety of AI.
Let me know your thoughts on AI: Good, Bad or Scary. In particular, if you know of any industry with a good ethical/moral code of conduct for AI, do share it with me.