AP

  • Student

Activity Feed

On November 15, 2018, AP commented on Crowdsourcing at Adidas – Be The Designer:

I think the second question you posed is an interesting one. While the Speedfactory concept may enable Adidas to better serve regional markets, scaling the concept could certainly be a challenge. However, I think that where Adidas could truly benefit is not in scaling up the number or quality of Speedfactories, but in opening a necessary number of Speedfactories in key regions for the sole purpose of collecting data on emerging preferences in footwear. That data would be invaluable, in that it could inform how Adidas shapes its prominent lines moving forward.

I think there is a good case to be made that the new APIs be left open and free to developers, rather than limited to paid users. As with the open-sourcing of IBM Watson’s code to encourage ‘out-of-the-box’ thinking, Capital One could benefit from the creativity of developers outside its own organization to drive future progress.

On November 14, 2018, AP commented on The tension between people and data at Netflix:

Very interesting article, and interesting questions to boot. I think the question of whether there is space for creatives to pursue untested show types in an environment of data-driven decision-making has no definitive answer. However, as yet, machine learning is a largely reactive innovation. The kinds of algorithms that Netflix and others employ look at past data and attempt to project conclusions from that data onto the options already in existence. For example, Netflix can look at the data and say, “because you liked show X, you should also like show Y,” but what Netflix can’t yet do is say, “because you liked show X, you might also like a show Z that does not currently exist.” That’s where the creatives still fit, in my opinion. Creatives can push boundaries and create content that users don’t yet know they will like, and until machine learning can crack that code, creatives and data-driven processes can co-exist.
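To make the “reactive” point concrete, here is a minimal, purely illustrative sketch of item-based recommendation (all users, shows, and ratings are invented for the example, not Netflix’s actual method). Notice that the algorithm can only rank shows already in the catalog; it has no way to propose a show Z that does not yet exist.

```python
# Toy item-based recommender: score unseen catalog items by their
# similarity to items the user already liked. Data is hypothetical.
from math import sqrt

# Rows = users, columns = shows (1 = liked, 0 = not watched/liked)
ratings = {
    "alice": {"X": 1, "Y": 1},
    "bob":   {"X": 1, "Y": 1},
    "carol": {"X": 0, "Y": 1},
}
shows = ["X", "Y"]

def cosine(a, b):
    """Cosine similarity between two rating columns."""
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def recommend(user):
    """Rank shows the user hasn't seen by similarity to shows they liked."""
    liked = [s for s in shows if ratings[user].get(s)]
    unseen = [s for s in shows if not ratings[user].get(s)]
    scored = []
    for cand in unseen:
        cand_col = [ratings[u][cand] for u in ratings]
        sim = max(
            (cosine(cand_col, [ratings[u][l] for u in ratings]) for l in liked),
            default=0.0,
        )
        scored.append((sim, cand))
    # Crucially, candidates come only from the existing catalog `shows`.
    return [s for _, s in sorted(scored, reverse=True)]

print(recommend("carol"))  # carol liked Y, so the only candidate is X
```

The catalog list is the hard boundary: no amount of similarity math surfaces content that was never created, which is exactly the gap creatives fill.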

On November 14, 2018, AP commented on Narrative Science, the Automated Journalism Startup:

Very interesting topic, and equally interesting questions. I think some of the fears you allude to are warranted, but they are also natural. The introduction of new technology into pre-existing industries always causes growing pains in the labor market. Certain skills, and sometimes even entire professions, become obsolete. At the same time, however, new professions and in-demand skill sets emerge to replace them, as human ingenuity is needed to integrate the new technology into the old industry. Will paid writers or journalists be replaced at some point in the future? Perhaps, but I think the more difficult question is whether this kind of new technology will be a net positive for the labor market as a whole as people are forced to adapt to the new environment.

On November 14, 2018, AP commented on Highway to the danger zone? Machine learning at DHL:

Well written, and interesting questions. As with any industry, technological innovations in this space will theoretically remove the need for human workers to perform certain tasks. However, with the advent of those innovations, new roles should also emerge in capacities that support their integration into pre-existing industries. For example, machine learning may replace the need for junior analysts at certain financial institutions as algorithmic trading becomes more prevalent. Even though those jobs will no longer exist, new jobs (such as coders and developers) will replace them. I don’t think the responsibility to accommodate shifts in the job market falls on any single company; rather, it falls on professionals to choose a learning path that keeps them ahead of the tech curve.

Great article, thanks for sharing. I think you alluded to how Facebook might address the potential concern of machine learning being used to manufacture, rather than detect, fake news. One of the advantages Facebook has in its war against fake news is its massive and massively diverse user base. Machine learning can be used to trawl through posts and detect content patterns indicative of fake news, and after that filter has been applied, Facebook can solicit its users to evaluate the flagged content, providing a level of nuance that its machine learning algorithms have not yet been able to achieve. Alternatively, Facebook could take the reverse approach and have users flag content that is then reviewed by its machine learning algorithm, in an effort to accelerate the learning curve and improve the accuracy of its fake news detection tool.
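The first flow described above (model filters first, humans review the uncertain middle, and human verdicts become new training data) can be sketched roughly as follows. Everything here is hypothetical and illustrative — the scoring function, thresholds, and post text are stand-ins, not Facebook’s actual system.

```python
# Sketch of a human-in-the-loop triage pipeline: a model scores posts,
# confident scores are auto-labeled, and uncertain posts are routed to
# human reviewers whose labels feed the next training round.

def model_score(post: str) -> float:
    """Stand-in for a trained classifier: probability the post is fake."""
    text = post.lower()
    if "miracle cure" in text:
        return 0.9   # confidently fake (toy heuristic)
    if "shocking" in text:
        return 0.5   # uncertain
    return 0.1       # confidently real

def triage(posts, lo=0.2, hi=0.8):
    """Split posts into auto-real, auto-fake, and needs-human-review."""
    auto_real, auto_fake, needs_review = [], [], []
    for post in posts:
        p = model_score(post)
        if p >= hi:
            auto_fake.append(post)       # model is confident: flag it
        elif p <= lo:
            auto_real.append(post)       # model is confident: pass it
        else:
            needs_review.append(post)    # uncertain: ask human reviewers
    return auto_real, auto_fake, needs_review

def collect_labels(needs_review, reviewer):
    """Human verdicts become labeled examples for retraining the model."""
    return [(post, reviewer(post)) for post in needs_review]

posts = ["Miracle cure found!", "Shocking claim about X", "Local bake sale Sunday"]
real, fake, review = triage(posts)
```

The “reverse approach” in the comment simply swaps the stages: users flag first, and `triage` runs only on the user-flagged queue.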

On November 14, 2018, AP commented on The long awaited tech disruption of the legal sector:

Geek Squad, thanks for the interesting take on machine learning in the legal context. I think you are correct that applying machine learning to the relatively simple, repeatable, and time-intensive task of reviewing due diligence paperwork during the initial phase of a legal proceeding is a solid use case. You also raise an interesting point about the future composition of the legal workforce as machine learning is integrated. Perhaps it will be a zero-sum game in which coders are hired instead of junior lawyers at the entry level; or perhaps a select number of developers can be integrated into a law firm to reduce the grunt work that junior lawyers must muddle through, freeing them up for developmental tasks typically reserved for more senior lawyers.

On November 14, 2018, AP commented on Fighting Fake News with AI:

Great article, and a very interesting use case for AI. I think you pose a number of worthwhile questions, especially regarding the biases introduced by those who set the boundaries and parameters within which the AI operates to detect questionable content. I don’t think there are any easy answers to those questions, and the only way I can imagine addressing bias in these AIs is to crowdsource the definition of questionable content to as many people as possible, over as many learning iterations as possible.