
Christopher Marks

The importance of ethics in the governance of artificial intelligence

8 min read

Artificial intelligence and machine learning have the potential to do a lot of good for the world. Beyond DeepMind's achievement of beating the world's Go champion in March last year, AI is already playing a role in making the insurance industry fairer through companies like Lemonade, improving primary health care through companies like Remedy, and bringing us closer to a future of self-driving cars, smart AI assistants, and highly personalised education for our kids. It is also being used in ways we don't necessarily notice or understand: curating our news feeds on Facebook, suggesting new music on Spotify, and profiling us for crimes we have yet to commit. On the flip side, AI researchers have been discussing the threat of the singularity, where AI could surpass humans as the most intellectually sophisticated entities on the planet. Regardless of its final applications, it is critical to bring together the worlds of computer scientists and machine learning experts and the humanities: lawmakers, sociologists, psychologists, economists, philosophers, anthropologists, ethicists, and more. We need an interdisciplinary approach to creating regulatory frameworks so that AI is leveraged to benefit humanity rather than used as a means of control.

Lucky for us, there's a bunch of brilliant people working on it

OpenAI was founded on the principle that AI should be advanced in a way that benefits humanity as a whole, unconstrained by the need to generate financial return. The Ethics and Governance of Artificial Intelligence Fund is an attempt by the Knight Foundation, Reid Hoffman, Pierre Omidyar, the MIT Media Lab, and the Berkman Klein Center, amongst others, to encourage transparent, cross-disciplinary research into how best to manage AI and to understand its broad effects on humanity. Stanford is conducting a One Hundred Year Study on Artificial Intelligence (AI100), a long-term investigation of the field of AI and its influences on people, their communities, and society. AI Now published a comprehensive report on the near-term social and economic implications of artificial intelligence technologies, focused on the themes of healthcare, labour, inequality, and ethics. And there's more, which the Berkman Klein Center has compiled into a handy list on their website here.

But what are some of the main issues facing AI, and what role should academia and other institutions play in guiding a beneficial future?

Julia Bossmann, President of the Foresight Institute, identifies 9 top ethical issues in AI:

  1. Unemployment at 'the end of jobs'. Self-driving cars, for example, could put millions of truckers out of work, but could also lower the risk of automobile-related accidents. Automation could also mean people work fewer hours, leaving more time to spend with their families and engage with their local communities. Others argue that AI technologies will lead to 'mass redeployment', much as the industrial revolution led to a shift from agricultural living to cities. However, automation is likely to create new roles for highly skilled workers rather than low-skilled ones, so some are arguing for a universal basic income to secure the livelihoods of displaced workers.
  2. Inequality and how we distribute wealth created by machines. Our economy is, broadly speaking, centred on compensating people for their time and contribution. With automation there will be less need for a traditional human workforce and revenues will flow to fewer people, so it's important to think about how the benefits of AI can be spread across all of humanity.
  3. Humanity and how machines affect our behaviour and interaction. AI could be used to nudge people towards more beneficial behaviour, but it could also be used to manipulate them. How our kids interact with human-like AI could also affect their development.
  4. Artificial stupidity and how we can guard against mistakes. It's critical to make sure that machines perform as intended and can't be manipulated for personal gain.
  5. Racist robots and how we eliminate AI bias. AI systems are created by humans, who can be biased and judgmental, so it's important to avoid algorithms that behave in detrimental ways, e.g. racial profiling when predicting future criminals.
  6. Security and how we keep AI safe from adversaries. Cybersecurity wars will escalate should AI get into the hands of people with malicious intent.
  7. Evil genies and how we protect against unintended consequences. AI is only as good as the data it is given, so it is important to inject human judgment into the results it returns, e.g. avoiding 'solutions' such as eradicating poverty by killing all poor people.
  8. Singularity and how we stay in control of a complex intelligent system. Human dominance of the planet stems from being smarter than other animals despite our inferior physical prowess, so it will be vital to manage AI if it becomes smarter than human beings. DeepMind is in the process of developing a 'kill switch' so that an advanced form of AI would be unable to avoid being shut down.
  9. Robot rights and how we define the humane treatment of AI. Consideration must be paid to how to treat AI legally when machines become able to perceive, feel, and act, much in the same way that animals have rights.

Urs Gasser, Executive Director of the Berkman Klein Center, sees 5 roles that universities will play when it comes to the ethics and governance of AI:

  1. Supplying open resources for the research, development, and deployment of AI systems, particularly those in the public interest and for social good. Commercial and nation-state interests in the deployment of AI will likely mean access won't remain open forever, so it is important for universities to ensure access to AI resources and infrastructure over time, e.g. computing resources and data sets that play a strategic role in ML.
  2. Access and accountability. Universities can act as independent, public-interest-oriented institutions that develop means of measuring and assessing the accuracy and fairness of AI systems. We need new methodologies for opening up the black box of how these algorithms perform, since even an algorithm's creator can sometimes struggle to determine how it reaches its decisions.
  3. Social and economic impact analysis. Universities can establish methodologies and determine suitable review and impact measurement factors. It's important to understand what these technologies are doing to society, and how we can ensure that our knowledge base survives and expands over time.
  4. Engagement and inclusion. Universities can bring together various AI stakeholders who may otherwise not have been willing to engage in dialogue, because they'd be in direct competition with one another. BKC has also discussed developing an inclusion lab, which would explore ways in which AI systems can be designed and deployed to support efforts aimed at creating a more diverse and inclusive digitally networked society.
  5. Translator. Universities can act as translators, communicating the implications, opportunities, and risks of AI from the relatively small group of experts who understand the technology to the public at large.

He concludes by emphasising the importance of closing the divide between engineers and computer scientists on one side and the humanities, social scientists, policymakers, and ethicists on the other. He also underscores the role that universities will play in developing AI for the public good:

"From the perspective of the university, the wave of AI that has washed over the globe has sparked great opportunities. More importantly, technological developments have underscored the responsibilities and indeed, idiosyncrasies, that endow universities with the unique ability to act as providers, conveners, translators, and integrators, to leverage artificial intelligence in the public interest and for the greater good."

The Berkman Klein Center and MIT Media Lab have also jointly created a video series about the ethics and governance of AI, which can be found here. Topics range from how to ethically design AI systems that complement humanity, to the threats AI could pose to civil liberties and democracy, the developmental challenges it poses for our kids, its use in education and personalised learning, and the need for openness and oversight in its development.

There's a lot of work still to be done, and opening the dialogue between researchers in different fields, industry, and government is a necessary step in the right direction.

Christopher Marks

First real-life Machine Learning project completed!!

1 min read

I finished the first (technically second if you include the optional exploratory analysis) project for the Udacity Machine Learning Engineer Nanodegree I'm enrolled in. It feels more like a mix between a comprehension test and an actual project, but either way I'm super stoked about it and the rest of the projects left in the course.

This project focused on building a model that could accurately predict housing prices in Boston, and came from the module about Model Evaluation and Validation: http://kitmarks.com/boston_housing.html
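For anyone curious, here's a rough sketch of the kind of model that project builds: a decision tree regressor whose depth is tuned with grid search and shuffled cross-validation, scored with R². This is my own illustration rather than the actual project code (the variable names and exact parameters are my choices), and it assumes an older scikit-learn release that still bundles the Boston housing dataset, which has since been removed:

# Minimal sketch: tune a decision tree's max_depth with grid search + R^2 scoring.
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split, GridSearchCV, ShuffleSplit
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import make_scorer, r2_score

# Load the Boston housing data and hold out a test set.
data = load_boston()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=42)

# Search over tree depths, using shuffled cross-validation splits and R^2.
cv = ShuffleSplit(n_splits=10, test_size=0.2, random_state=0)
grid = GridSearchCV(
    DecisionTreeRegressor(random_state=0),
    param_grid={'max_depth': list(range(1, 11))},
    scoring=make_scorer(r2_score),
    cv=cv)
grid.fit(X_train, y_train)

print("Best max_depth:", grid.best_params_['max_depth'])
print("Test R^2:", r2_score(y_test, grid.best_estimator_.predict(X_test)))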

Next up, Supervised Learning.

Christopher Marks

A thoughtful gift

1 min read

Got on a busy tube and accidentally stepped on a guy's duffel bag. Apologised immediately. He says "what?" I say, "Sorry for stepping on your bag." He goes "that's okay."

"There's a valentine's present for my girlfriend in there."
I start feeling really bad.
He then pulls out a pink dildo in its packaging.
"Do you think she'll like it?"
"It cost me £50"
I laugh. He then pulls out two cans of beer from his duffel bag, cracks them both open, and says "Come on, drink with me"

THIS
IS
REAL

Christopher Marks

"If we do that, we can add a kazillion dollars to the bottom line!"

1 min read

"Whoaaa, Jeff!" *high five* "Aaaalright, that sounds fantastic!"

I was watching the Enron documentary 'The Smartest Guys in the Room', and the above clip of Enron's own skit about 'hypothetical future value accounting' cracked me up so much.

edit: less funny after realising PGE linemen put their entire 401(k)s in Enron stock

Christopher Marks


Do you have any questions for us?

4 min read

Interviewer: Well, that concludes this part of the interview process. Before we proceed, do you have any questions for us?

Candidate: Sure do.

Here’s a question to help me understand the culture of the company. A manager walks over to her associate, who’s brewing a coffee at 6:45am. She had forgotten that the VP arranged a meeting with a major client, so she needs a revised report on her desk by 10am. The report just needs the most up-to-date financial data, but she also stipulates that there has to be a page with 3 green triangles, 2 blue rectangles, and 1 red circle.

The associate understands the importance of the request, so he gets started right away. But he misremembers the details of the report, and convinces himself that his boss asked for 3 blue triangles, 2 green rectangles, and 1 red circle.

At 7:30am, the associate walks over to his analyst, who had been at the office until midnight the previous night, and tells her that the report is now her responsibility. He also says it has to be finished no later than 9:30am. He makes sure to stress the importance of there being a page with 3 blue triangles, 2 green rectangles, and 1 red circle. The analyst gets started right away too, and without taking any breaks manages to finish the report by 9:55am. The associate briefly scans the report, checks that the page meets the specification, and has 2 copies printed out for his manager’s meeting.

When the manager sits down, she takes a look at the report and doesn’t notice the mistake. But as they’re walking through the specifics of the deal, the client does, and is not impressed. They lose the sale, and their reputation is damaged.

Whose fault is it?

Interviewer: Well. It’s the fault of the manager for forgetting the meeting in the first place. It’s also the fault of the associate for misremembering the details. But the analyst is also to blame for not being able to get the report finished on time so that it could be proofread properly. The VP of Operations may also be to blame for the lack of organisation and structure that allowed for this problematic situation to arise, but that would depend on if this was something that happened regularly.

Candidate: Sure. Now assume that each person doesn’t have full information of the situation, and they are real people with real egos and reputations on the line. The last thing the manager remembers before seeing the final report is that she made it the responsibility of the associate at 6:45am. The last thing that the associate remembers before seeing the final report is giving the analyst instructions at 7:30am. The analyst knows she made the report to the exact specifications that the associate asked for.

Who does the manager blame? Who does the associate blame? Who does the analyst blame? Who does the VP blame?

Interviewer: The manager would mainly blame the associate for the error, though she would also put some of the blame on the analyst. The associate would blame the analyst, and would likely forget that they were the one who asked for the wrong colours in the first place. The analyst would blame the associate for messing up the instructions and for making it the analyst’s problem in the first place. The VP blames the manager.

Candidate: Where does the blame really lie?

Interviewer (bad answer): The associate should be blamed for misremembering the colours, and the analyst should be blamed for getting the report finished late.

Interviewer (good answer): The employees share the blame because they are a unit that works as a team. The important thing is that they identify the systemic problems that allowed for this mistake to occur, so that they can avoid it in the future. In this case, it may mean clearer communication and better organisation from the manager and VP.

Christopher Marks

The Curse of Credentialism

1 min read

"A world in which success means Rhodes/Teach for America/Goldman/McKinsey followed by Yale Law School/Harvard Business School followed by Blackstone/Bridgewater/Facebook is one in which too many talented, well-intentioned people follow the same path and end up doing the same few things. (Since I graduated from college a quarter-century ago, the only real additions to the hierarchy have been TFA and the technology behemoths.) In their famous paper, Kevin Murphy, Andrei Shleifer, and Robert Vishny found that countries with more engineering majors tend to grow faster and those with more law students tend to grow slower. A society in which smart, hard-working young people with generic ambitions tend to become hedge fund and private equity fund managers, management consultants, corporate lawyers, and strategists for technology monopolies is probably not one that is allocating talent effectively."

James Kwak, 'The Curse of Credentialism'

Christopher Marks

Contrails at sunset look like meteors

1 min read

I'd need a much better camera to capture how vivid and cool this looks:


Christopher Marks

4 Your Eyez Only

1 min read

Cannot wait for the new J. Cole album to be released.

Here are two tracks: 'False Prophets' and 'Everybody Gotta Die'

Christopher Marks