To me, the question of 23andMe profiting from customer data is more intriguing than the question of privacy. While companies’ profits used to be a function of their employees, they are now increasingly driven by their customers (and the data they provide) — what right does a customer have to this data?
To the question of our humanity, it feels challenging to draw a line in the sand now. Have we not always had an inherent drive to take certain steps to optimize the chances that our offspring will be healthy and successful? Who’s to say that modifying our own genome is really that much larger of a step? On top of all of that, who’s to say we aren’t all just in a simulation right now?
Boom goes the dynamite. Duolingo should definitely branch out into other areas of education besides language learning before their current tree burns to the ground. The same AI technologies that allow them to be successful are enabling people to communicate in ways that decrease the necessity of learning a new language.
To your point, what additional value can they create using the data they are already collecting? They should consider what aspects of learning progression and cultural differences might be inferred from such a data set, and conduct a market landscape (through BCG or otherwise) of which industries or companies might benefit from those inferences.
I agree with Allie that it makes strategic sense for Facebook to largely stay politically impartial (except in extreme circumstances when it believes it cannot compromise its values by not speaking out) since it is a mass market product.
Because of the inherent risk mentioned in this article — that machine learning enables bad hombres to create fake news just as much as it enables good hombres to stop it — I wonder to what extent Facebook should publicly promise to achieve the goal of stopping fake news (which it seems like they aren’t sure they’ll be able to do). I wonder what other levers / pathways are in front of them — could they instead (or in addition) educate and empower their users to detect fake news on their own?
Wow, what a great founding team! I’d push the team to think about what they are trying to solve for — is the point of income-sharing agreements not to improve access to education (in a way that is mutually beneficial to the recipient and an investor)? If so, by screening for characteristics that may be negatively correlated with income, are you not likely to exclude the majority of the individuals who would actually demand this type of product the most?
I worry about the sustainability of Duolingo’s model, given the advancements in the same AI technologies that are supporting the company’s success. Ultimately, progression in AI will lessen the demand for language learning as apps are able to quickly and easily translate something. I wonder how Duolingo can leverage a portfolio of products to ensure investors are receiving their required return.
To the open question of to what extent human interaction in learning can be replaced by machines — I think the answer is similar to what it is in other industries: we should identify the activities that machines will be able to perform better than humans (e.g., analyzing large data sets and spotting trends) and identify the activities that humans will have a distinct advantage in over machines (e.g., empathy, creativity).
In an education context, I think this means that the role of a teacher in a classroom (or in a portal, in the case of online education) will evolve significantly: it will become less about delivering the academic material and more about managing the classroom and the holistic learning experiences of the students. Teaching is an example of a profession that likely won’t be “displaced” by advancements in artificial intelligence, but rather disrupted in terms of which activities are the highest value-add for teachers, rather than machines, to perform.