3 min read
I wanted to create a list of all the things I’ve read or seen that have made a long-lasting impression and influenced how I think. Here’s that list:
Stuff on the web:
I’m going to stop before I list every song I listen to.
The Tao of Pooh (Benjamin Hoff)
Little Brother (Cory Doctorow)
Homeland (Cory Doctorow)
Liar’s Poker (Michael Lewis)
The Big Short (Michael Lewis)
Flash Boys: A Wall Street Revolt (Michael Lewis)
Delivering Happiness (Tony Hsieh)
Zero to One (Peter Thiel)
The Hard Thing About Hard Things (Ben Horowitz)
Freakonomics & SuperFreakonomics (Stephen J. Dubner & Steven Levitt)
The Mystery of Capital (Hernando De Soto)
Capitalism and Freedom (Milton Friedman)
Nudge: Improving Decisions about Health, Wealth, and Happiness (Richard Thaler & Cass Sunstein)
Misbehaving: The Making of Behavioural Economics (Richard Thaler)
Flow: The Psychology of Optimal Experience (Mihaly Csikszentmihalyi)
Making Comics (Scott McCloud)
If I remember to, I’ll update this as new things inspire me. What would you add to the list?
1 min read
I pretty often think of the lines:
"And then one day you find
ten years have got behind you.
No one told you when to run,
you missed the starting gun."
from Pink Floyd's Time, and imagine a 30-year-old, nostalgic me, unfulfilled with how things turned out. I use this to motivate myself to make a little bit of time every day for doing small things like reading, playing the piano, or drawing: things that require persistent effort. To use my time well, and to make sure I don't fritter and waste the hours in an offhand way.
I've listened to this song probably over a hundred times. But I realised recently that even though I'm only in my 20s, it applies going back in time as well. Ten years have got behind me since I was 11. What's more, it's not like time's up once you're 30. Or 40, or 50, or 80 (though after that the odds of time being up do dramatically increase). Health-willing, you can always push yourself to learn and do new things.
Half a page of scribbled lines is how to start.
2 min read
London, United Kingdom— Emphatically stating how Blockchain technology is going to disrupt the way we buy cereal, third-year Economics student James Wilson can’t contain his excitement for how the latest tech media buzzword is going to radically transform the world. “I haven’t been this excited since I first learned about the Internet of People,” Wilson said. “Can you imagine how cool that will be?”
“Have you heard about Ethereum?” Wilson asked uninterested passersby, “Think Blockchain 2.0, but with the power to replace lawyers and bankers. Through software!” When we asked Wilson about his thoughts on recent advancements in machine learning and artificial intelligence as heavily covered in publications like TechCrunch and Business Insider, he responded by saying “Dude don’t even get me started. That shit is so fucking cool!”
“I took a year out from uni to work at a FinTech / PropTech / TechDeck company, which was made even better by the tech media overstating the potential impact these kinds of companies will have on society,” Wilson said eagerly. At press time, Wilson was reportedly meeting with VC firms seeking investment in his new venture: a company that sells software as a service, as a service.
8 min read
Artificial intelligence and machine learning techniques have the potential to do a lot of good for the world. Beyond DeepMind's feat of beating the world Go champion in March last year, AI is already playing a role in improving the fairness of the insurance industry through companies like Lemonade, improving primary health care treatment through companies like Remedy, and making possible a future of self-driving cars, smart AI assistants, and highly detailed, personalised education for our kids. It is also being used in ways we don't necessarily notice or understand: curating our news feeds on Facebook, suggesting new music to us on Spotify, and profiling us for crimes we have yet to commit. On the flip side, AI researchers have been discussing the threat of the singularity, the point at which AI could surpass humans as the most intellectually sophisticated entities on the planet. Regardless of its final applications, it is critical to unite the world of computer scientists and machine learning experts with that of the humanities: lawmakers, sociologists, psychologists, economists, philosophers, anthropologists, ethicists, and more. We need an interdisciplinary approach to creating regulatory frameworks, so that AI is leveraged to benefit humanity rather than used as a means of control.
Lucky for us, there's a bunch of brilliant people working on it.
OpenAI was founded on the principle that AI should be advanced in a way that benefits humanity as a whole, unconstrained by the need to generate financial return. The Ethics and Governance of Artificial Intelligence Fund is an attempt by the Knight Foundation, Reid Hoffman, Pierre Omidyar, the MIT Media Lab, and the Berkman Klein Center, amongst others, to encourage transparent, cross-disciplinary research into how best to manage AI, as well as to understand its broad effects on humanity. Stanford is conducting a One Hundred Year Study on Artificial Intelligence (AI100), a long-term investigation of the field of AI and its influences on people, their communities, and society. AI Now published a comprehensive report on the near-term social and economic implications of artificial intelligence technologies, focused on the themes of healthcare, labour, inequality, and ethics. And there's more, which the Berkman Klein Center has compiled into a handy list on their website here.
But what are some of the main issues facing AI, and what role should academia and other institutions play in guiding a beneficial future?
Julia Bossmann, who is the President of the Foresight Institute, believes there are 9 top ethical issues in AI:
Urs Gasser, who is the Executive Director of the Berkman Klein Center, sees there being 5 roles that universities will play when it comes to the ethics and governance of AI:
He concludes by emphasising the importance of closing the divide between engineers and computer scientists on one side, and the humanities, social scientists, policymakers, and ethicists on the other. He also underscores the role that universities will play in developing AI for the public good:
“From the perspective of the university, the wave of AI that has washed over the globe has sparked great opportunities. More importantly, technological developments have underscored the responsibilities and indeed, idiosyncrasies, that endow universities with the unique ability to act as providers, conveners, translators, and integrators, to leverage artificial intelligence in the public interest and for the greater good.”
The Berkman Klein Center and MIT Media Lab have also jointly created a video series about the ethics and governance of AI, which can be found here. Topics range from how we should ethically design AI systems that complement humanity, to how AI could threaten civil liberties and democracy, pose developmental challenges for our kids, and be injected into education and personalised learning, to how the development of AI will need to be open and subject to oversight.
There's a lot of work still yet to be done, and opening the dialogue between different fields of researchers, industry, and the government is a necessary step in the right direction.
1 min read
I finished the first (technically second if you include the optional exploratory analysis) project for the Udacity Machine Learning Engineer Nanodegree I'm enrolled in. It feels more like a mix between a comprehension test and an actual project, but either way I'm super stoked about it and the rest of the projects left in the course.
This project focused on building a model that could accurately predict housing prices in Boston, and came from the module about Model Evaluation and Validation: http://
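The core idea of that module, holding out a validation set and choosing the hyperparameter value that scores best on it, can be sketched in plain Python. To be clear, this is a toy illustration and not the project's actual code: the data below is synthetic, and a simple nearest-neighbour model stands in for the real regressor.

```python
import random

# Synthetic stand-in for a housing dataset: one feature
# (say, number of rooms) and a price that grows with it, plus noise.
random.seed(0)
xs = [random.uniform(3, 9) for _ in range(200)]
data = [(x, 3.0 * x + random.uniform(-1, 1)) for x in xs]

# Hold out 25% of the points for validation -- the core of model
# evaluation: never score a model on the data it was fit to.
random.shuffle(data)
split = int(0.75 * len(data))
train, valid = data[:split], data[split:]

def knn_predict(train, x, k):
    """Predict a price as the mean of the k nearest training prices."""
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return sum(y for _, y in nearest) / k

def validation_mse(train, valid, k):
    """Mean squared error of the k-NN model on the held-out set."""
    return sum((knn_predict(train, x, k) - y) ** 2 for x, y in valid) / len(valid)

# A tiny grid search over the model's single hyperparameter, keeping
# the value that scores best on the validation set.
scores = {k: validation_mse(train, valid, k) for k in (1, 3, 5, 7)}
best_k = min(scores, key=scores.get)
print("best k:", best_k)
```

The same split-fit-score loop is what scikit-learn's grid search automates for models like the decision tree regressor used in the project.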
Next up, Supervised Learning.
1 min read
Got on a busy tube and accidentally stepped on a guy's duffel bag. Apologised immediately. He says "what?" I say, "Sorry for stepping on your bag." He goes "that's okay."
"There's a valentine's present for my girlfriend in there."
I start feeling really bad.
He then pulls out a pink dildo in its packaging.
"Do you think she'll like it?"
"It cost me £50."
I laugh. He then pulls out two cans of beer from his duffel bag, cracks them both open, and says, "Come on, drink with me."
1 min read
Watching the Enron documentary 'The Smartest Guys in the Room', and the above clip of Enron's own skit about 'hypothetical future value accounting' cracked me up so much.
edit: less funny after realising PGE linemen put their entire 401(k)s in Enron stock
4 min read
Interviewer: Well, that concludes this part of the interview process. Before we proceed, do you have any questions for us?
Candidate: Sure do.
Here’s a question to help me understand the culture of the company. A manager walks over to her associate, who’s brewing a coffee at 6:45am. The manager forgot that the VP arranged a meeting with a major client, so she needs a revised report on her desk by 10am. The report just needs the most up-to-date financial data, but she also stipulates that there has to be a page with 3 green triangles, 2 blue rectangles, and 1 red circle.
The associate understands the importance of the request, so he gets started right away. But he misremembers the details of the report, and convinces himself that his boss asked for 3 blue triangles, 2 green rectangles, and 1 red circle.
At 7:30am, the associate walks over to his analyst, who had been at the office until midnight the previous night, and tells her that the report is now her responsibility. He also says it has to be finished by no later than 9:30am. He makes sure to stress the importance of there being a page with 3 blue triangles, 2 green rectangles, and 1 red circle. The analyst gets started right away too, and without taking any breaks, manages to finish the report by 9:55am. The associate briefly scans the report, checks that the page meets the specification, and has 2 copies printed out for his manager’s meeting.
When the manager sits down, she takes a look at the report and doesn’t notice the mistake. But as they’re walking through the specifics of the deal, the client does, and is not impressed. They lose the sale, and their reputation is damaged.
Whose fault is it?
Interviewer: Well. It’s the fault of the manager for forgetting the meeting in the first place. It’s also the fault of the associate for misremembering the details. But the analyst is also to blame for not getting the report finished on time so that it could be proofread properly. The VP of Operations may also be to blame for the lack of organisation and structure that allowed this problematic situation to arise, but that would depend on whether this was something that happened regularly.
Candidate: Sure. Now assume that each person doesn’t have full information about the situation, and that they are real people with real egos and reputations on the line. The last thing the manager remembers before seeing the final report is that she made it the responsibility of the associate at 6:45am. The last thing the associate remembers before seeing the final report is giving the analyst instructions at 7:30am. The analyst knows she made the report to the exact specifications that the associate asked for.
Who does the manager blame? Who does the associate blame? Who does the analyst blame? Who does the VP blame?
Interviewer: The manager would mainly blame the associate for the error, though she would also put some of the blame on the analyst. The associate would blame the analyst, and would likely forget that they were the one who asked for the wrong colours in the first place. The analyst would blame the associate for messing up the instructions and for making it the analyst’s problem in the first place. The VP blames the manager.
Candidate: Where does the blame really lie?
Interviewer (bad answer): The associate should be blamed for misremembering the colours, and the analyst should be blamed for getting the report finished late.
Interviewer (good answer): The employees share the blame because they are a unit that works as a team. The important thing is that they identify the systemic problems that allowed for this mistake to occur, so that they can avoid it in the future. In this case, it may mean clearer communication and better organisation from the manager and VP.