I really enjoyed reading your post – super interesting. I also agree with David about the challenges of incorporating machine learning into this model. My biggest struggle is that the perception of beauty is dynamic. For example, within the beauty industry, we have seen a shift from bold, colorful looks to more natural, minimal ones (hence the rise of Glossier). By the time machine learning picks up on one trend, consumers may have already moved on.
I think the risks you lay out are extremely relevant, particularly the point around bias. For example, I can see the algorithm placing candidates into jobs based purely on their prior experience rather than actual talent, simply because of the time they spent developing one particular skill. That being said, I think there is potential for this tool to serve as an initial screen and then be paired with an in-person interview to account for some of the potential bias as the funnel narrows.
I really enjoyed learning about Kroger’s machine learning initiatives. In my mind, what they are doing is impressive and, if executed properly going forward, can help them maintain their leading position in the grocery landscape. It is important for Kroger to find ways to upsell to consumers – it can use coupons to attract consumers to stores based on prior purchases (i.e., repeat buys) while simultaneously introducing new products to consumers as they walk the aisles (how this would be implemented is TBD, but it would need to involve personalized recommendations based on prior purchases).
I really enjoyed reading your Glossier post from a machine learning standpoint, as I had studied the company from a crowdsourcing viewpoint. While machine learning would provide increased efficiency as the company scales, I think there is definite risk in overdoing it and losing the connection with consumers that is Glossier’s strongest asset. The solution in my mind is to limit machine learning and roll it out slowly (i.e., channel by channel and only within the more automatable aspects of data analytics) rather than aim to completely automate the business model.
I think the questions you pose are very interesting. In my mind, the counterfeit risk is definitely a concern. If these machines can replicate products so exactly that consumers cannot differentiate between “real” and “fake,” then there is no incentive for them to pay more for the “real” product. The question, then, is what additional investment would be needed – and how feasible it would be – to create new products with the printer Chanel has today as it looks to expand both within and beyond the beauty industry; a breakeven analysis could be helpful here.
As an avid Spotify user, I really enjoyed learning about how Spotify is using machine learning. To answer your second question, I think there is a delicate balance between delivering music that consumers are known to like and surfacing “new” music. Incorporating too much machine learning into the composition of music could stifle artists’ creativity, which would prevent consumers from discovering new genres and songs they might otherwise come to like. Another risk is changing consumer preferences – I would guess that consumers prefer different types of music as they age, so machine learning based on prior preferences may become less relevant going forward.
I also agree with the comment above that Stitch Fix’s algorithm is transferable to other industries, and I would argue it may be especially relevant for certain sectors. For example, within the beauty space we see high repeat purchase rates – consumers have very particular preferences, and makeup inherently runs out, both of which make purchases highly predictable. As consumers shift their purchases from retail to the DTC channel, I can see machine learning becoming a differentiating factor and a way for brands to disrupt the traditional beauty industry.