Thanks for the article — this is very interesting, and I learned a lot about Netflix! One question I am left with is whether the AI you point to can help solve the “always on” problem. What I mean is that regular TV still beats OTT solutions like Netflix for some customers because it is easy to turn on and just watch. There is the notion that more choice can create unhappiness. I wonder if the algorithms can help with this by knowing what I like and then restricting my choices accordingly, ultimately just “playing” options for me, à la a real TV channel.
Great article! The examples of error here are particularly striking in the world of security. In some ways, humans may be more forgiving of a human error than of a computer error, because one is more relatable. As always, the answer may be a two-pronged system, though the issues of bias that you discuss are also a cause for concern. The question I am left with is whether security should be a leader or a follower: should security be the arena where we test these systems, or the place they go only once they are well proven?
Dear Fake Billy,
I agree this is a fascinating writeup, and definitely smarter than something Real Billy could create. It scares me, and I worry that the punitive approach may not be enough; then again, how else are other bad creations like this managed? Difficulty of access, social norms, and a sort of “small” policing seem to be how we avoid other perils. I also wonder if the best approach is to make sure these types of models cannot reach consumers easily, so that some barrier to entry remains to help protect us.
This is a great writeup! Fraud is definitely a major issue, and innovation here directly saves hassle for customers and money for PayPal. This article is timely for me, because PayPal actually thought one of my accounts was fraudulent. I guess I am naturally a sketchy guy.
The question I have is whether the fraudsters themselves will continue to get smarter here and even use machine learning. At what point will we have computer vs. computer battling it out for the best fraud? Or, at a certain point, will machines be smart enough to not do evil?
Thanks for such a thought-provoking article. I absolutely love the idea of the calendar tie-ins, so the company can proactively provide you a new wardrobe. Wedding coming up? You’re set! Though I am not in the fashion space, I wonder if this will also spur even more creativity, so that no two people have the same dress. You could essentially guarantee “you are the only one in Chicago getting this dress this year” (for a price).
I also think their retail approach is interesting. In cities like Chicago, there are showrooms, so the physical brand component still exists. Of course, the machine learning may further reduce the need for them, but complementing the online / AI portions with good old-fashioned retail may strengthen their position.
This is really interesting. As a “techy” guy, I take a lot of pride in telling folks that they suffer from the “frequency illusion” you mention: imagining the bots are listening in when they are not.
The use case you bring up around whether Facebook should switch on the mic is interesting. If they have the remote capability to do so, I would imagine that it’s inevitable: Facebook will do it on purpose, by accident, or via court order. My preference is of course the last of those, but what happens when the computer is smart enough to just do it itself, with no permission? (Now we are in the middle of a sci-fi movie.)
The question then is whether you take more of the Apple approach and limit your own ability to see or access the data. If the ability doesn’t exist, it can’t be exploited for good… or evil.