Hamdi's emphasis on kindness, community, inclusion, training, enfranchisement, and people-first principles is inspiring. It feels like the only right way to think about business.
1 min read
My list of TV shows to watch:
3 min read
I wanted to create a list of all the things I’ve read or seen that have made a long-lasting impression and influenced how I think. Here’s that list:
Stuff on the web:
I’m going to stop before I list every song I listen to.
The Tao of Pooh (Benjamin Hoff)
Little Brother (Cory Doctorow)
Homeland (Cory Doctorow)
Liar’s Poker (Michael Lewis)
The Big Short (Michael Lewis)
Flash Boys: A Wall Street Revolt (Michael Lewis)
Delivering Happiness (Tony Hsieh)
Zero to One (Peter Thiel)
The Hard Thing About Hard Things (Ben Horowitz)
Freakonomics & SuperFreakonomics (Stephen J. Dubner & Steven Levitt)
The Mystery of Capital (Hernando De Soto)
Capitalism and Freedom (Milton Friedman)
Nudge: Improving Decisions about Health, Wealth, and Happiness (Richard Thaler & Cass Sunstein)
Misbehaving: The Making of Behavioural Economics (Richard Thaler)
Flow: The Psychology of Optimal Experience (Mihaly Csikszentmihalyi)
Making Comics (Scott McCloud)
If I remember to, I’ll update this as new things inspire me. What would you add to the list?
1 min read
I pretty often think of the lines:
"And then one day you find
ten years have got behind you.
No one told you when to run,
you missed the starting gun."
from Pink Floyd's Time and imagine a 30-year-old, nostalgic me, unfulfilled with how things turned out. I use this to motivate myself to make a little bit of time every day for small things like reading, playing the piano, or drawing. Things that require persistent effort. To use my time well, and to make sure I don't fritter and waste the hours in an offhand way.
I've listened to this song probably over a hundred times. But I realised recently that even though I'm only in my 20s, it applies going back in time as well. Ten years have gotten behind me since I was 11. What's more, it's not like time's up once you're 30. Or 40, or 50, or 80 (though after that the odds of time being up do dramatically increase). Health permitting, you can always push yourself to learn and do new things.
Half a page of scribbled lines is how to start.
2 min read
London, United Kingdom— Emphatically stating how Blockchain technology is going to disrupt the way we buy cereal, 3rd year Economics student James Wilson can’t contain his excitement for how the latest tech media buzzword is going to radically transform the world. “I haven’t been this excited since I first learned about the Internet of People” Wilson said, “Can you imagine how cool that will be?”
“Have you heard about Ethereum?” Wilson asked uninterested passersby, “Think Blockchain 2.0, but with the power to replace lawyers and bankers. Through software!” When we asked Wilson about his thoughts on recent advancements in machine learning and artificial intelligence as heavily covered in publications like TechCrunch and Business Insider, he responded by saying “Dude don’t even get me started. That shit is so fucking cool!”
“I took a year out from uni to work at a FinTech / PropTech / TechDeck company, which was made even better by the tech media overstating the potential impact these kinds of companies will have on society,” Wilson said eagerly. At press time, Wilson was reportedly meeting with VC firms seeking investment in his new venture: a company that sells software as a service, as a service.
8 min read
Artificial intelligence and machine learning techniques have the potential to do a lot of good for the world. Beyond DeepMind's feat of beating the world Go champion in March last year, AI is currently playing a role in improving the fairness of the insurance industry through companies like Lemonade, improving primary health care through companies like Remedy, and making possible a future of self-driving cars, smart AI assistants, and highly detailed, personalised education for our kids. It is also being used in ways we don't necessarily notice or understand: curating our news feeds on Facebook, suggesting new music to us on Spotify, and profiling us for crimes we have yet to commit. On the flip side, AI researchers have been discussing the threat of the singularity, where AI could surpass humans as the most intellectually sophisticated entities on the planet. Regardless of its final applications, it is critical to bring the worlds of computer scientists and machine learning experts together with the humanities: lawmakers, sociologists, psychologists, economists, philosophers, anthropologists, ethicists, and more. We need an interdisciplinary approach to creating regulatory frameworks so that AI is leveraged to benefit humanity, rather than used as a means of control.
Lucky for us, there's a bunch of brilliant people working on it
OpenAI was founded on the principle that AI should be advanced in a way that benefits humanity as a whole, unconstrained by the need to generate financial return. The Ethics and Governance of Artificial Intelligence Fund is an attempt by the Knight Foundation, Reid Hoffman, Pierre Omidyar, the MIT Media Lab, and the Berkman Klein Center, amongst others, to encourage transparent, cross-disciplinary research into how best to manage AI, as well as to understand its broad effects on humanity. Stanford is conducting a One Hundred Year Study on Artificial Intelligence (AI100), a long-term investigation of the field of AI and its influences on people, their communities, and society. AI Now published a comprehensive report on the near-term social and economic implications of artificial intelligence technologies, focused on the themes of healthcare, labour, inequality, and ethics. And there's more, which the Berkman Klein Center has compiled into a handy list on their website here.
But what are some of the main issues facing AI, and what role should academia and other institutions play in guiding a beneficial future?
Julia Bossmann, who is the President of the Foresight Institute, believes there are 9 top ethical issues in AI:
Urs Gasser, who is the Executive Director of the Berkman Klein Center, sees there being 5 roles that universities will play when it comes to the ethics and governance of AI:
He concludes by emphasising the importance of closing the divide between engineers and computer scientists and the humanities, social scientists, policymakers, and ethicists. He also underscores the role that universities will play in developing AI for public good:
“From the perspective of the university, the wave of AI that has washed over the globe has sparked great opportunities. More importantly, technological developments have underscored the responsibilities and indeed, idiosyncrasies, that endow universities with the unique ability to act as providers, conveners, translators, and integrators, to leverage artificial intelligence in the public interest and for the greater good.”
The Berkman Klein Center and MIT Media Lab have also jointly created a video series about the ethics and governance of AI, which can be found here. Topics range from how we should ethically design AI systems that complement humanity, to how AI could threaten civil liberties and democracy, pose developmental challenges for our kids, and be woven into education and personalised learning, to how the development of AI will need to be open and subject to oversight.
There's a lot of work still yet to be done, and opening the dialogue between different fields of researchers, industry, and the government is a necessary step in the right direction.
1 min read
I finished the first (technically second if you include the optional exploratory analysis) project for the Udacity Machine Learning Engineer Nanodegree I'm enrolled in. It feels more like a mix between a comprehension test and an actual project, but either way I'm super stoked about it and the rest of the projects left in the course.
This project focused on building a model that could accurately predict housing prices in Boston, and came from the module about Model Evaluation and Validation: http://
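The project's exact notebook isn't reproduced here, but the core workflow the Model Evaluation and Validation module covers (a train/test split, a cross-validated grid search over model complexity, and R² scoring) can be sketched like this. This is a minimal sketch using scikit-learn, with a synthetic regression dataset standing in for the Boston housing data (an assumption made so the snippet stays self-contained):

```python
# Minimal sketch of a model evaluation & validation workflow:
# hold out a test set, grid-search over tree depth with cross-validation,
# then score the best model on the held-out data with R^2.
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score

# Synthetic stand-in for the housing data (500 samples, 8 features)
X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Cross-validated search over max_depth, the knob controlling model complexity
grid = GridSearchCV(
    DecisionTreeRegressor(random_state=42),
    param_grid={"max_depth": range(1, 11)},
    scoring="r2",
    cv=5,
)
grid.fit(X_train, y_train)

test_r2 = r2_score(y_test, grid.predict(X_test))
print("best depth:", grid.best_params_["max_depth"])
print("test R^2:", test_r2)
```

The split-before-search order is the important part: the grid search only ever sees the training portion, so the final R² on the test set is an honest estimate of generalisation.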
Next up, Supervised Learning.
1 min read
Got on a busy tube and accidentally stepped on a guy's duffel bag. Apologised immediately. He says "what?" I say, "Sorry for stepping on your bag." He goes "that's okay."
"There's a valentine's present for my girlfriend in there."
I start feeling really bad.
He then pulls out a pink dildo in its packaging.
"Do you think she'll like it?"
"It cost me £50"
I laugh. He then pulls out two cans of beer from his duffel bag, cracks them both open, and says "Come on, drink with me"