Ratnika Prasad

  • Student

Activity Feed

On November 14, 2018, Ratnika Prasad commented on Does Additive Manufacturing Pose a Threat to Gun Control?:

This is a subject that has haunted me ever since the news of Defense Distributed 3D printing a gun came out in August. In my mind, this is a clear case of an area in America where not only is the current regulatory framework ill-equipped to handle the issue, as you mentioned, but the institutional set-up will also not lead to the successful creation of new laws that regulate it in a rational manner. There are two key reasons for this. One, as we have seen with Amsterdam and Airbnb or San Francisco and Uber, forward-looking regulation of technology requires the government to collaborate with the proponents of the technology to co-build that regulation. Only in this fashion do you get regulations that aren’t reactionary, that are sustainable, and that have enough buy-in from all sides. In this case, given the highly politicized nature of gun regulation in America, it is unlikely that regulators, gun proponents like the NRA, and opponents like Everytown for Gun Safety will ever come into the same room to talk about common-sense future gun control. Second, the distributed nature of the internet has already made existing evils like child pornography nearly impossible to tackle. No matter how many places the government takes down distributions of 3D-printed gun designs from, the internet has a hydra-like quality: blocked content always finds its way to another part of the web. So I am extremely pessimistic about our ability to respond proactively to this burgeoning concern in any constructive fashion.

On November 14, 2018, Ratnika Prasad commented on Can an algorithm replace “the pill”?:

Fantastic and informative article! It feels like Natural Cycles is a great example of the classic struggle we have seen in class with predictive models in general and machine learning in particular: garbage in, garbage out. In this case, inaccurate data input dramatically reduces the efficacy of the product through no fault of its own, simply because its predictions become less effective at assisting people. While I don’t believe that tech necessarily needs to be held to a higher standard, I would argue that for something that is a medical treatment of sorts and is highly reliant on the patient engaging in accurate, predictable behavior (similar to the idea of a pill needing to be taken daily), the advertised effectiveness rates should not be the ones achieved in randomized controlled trials, but rather the bear case of an average user whose data entries may be off. To me this makes for a more accurate comparison with alternatives such as an IUD, which take away the element of human behavior. I really like the idea of wearables as a solution to this human-input issue: we know from behavioral economics that the fewer opt-in choices you require a human to make, the more likely it is that a given function will be carried out successfully.
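To make the garbage-in, garbage-out point concrete, here is a toy simulation (entirely hypothetical parameters and logic, not Natural Cycles’ actual algorithm) of how the same prediction rule goes from essentially never mis-flagging a cycle under perfect data entry to mis-flagging a large share of cycles once entries are noisy — roughly the gap between “perfect use” and “typical use” effectiveness:

```python
import numpy as np

# Toy garbage-in-garbage-out sketch (hypothetical, NOT Natural Cycles' actual
# algorithm): the app estimates ovulation from user-entered temperatures and
# flags a fertile window around that estimate. Sloppy data entry shifts the
# estimate, so the flagged window sometimes fails to cover the true one.
rng = np.random.default_rng(42)
n_cycles = 100_000
true_ovulation = 14.0  # assume ovulation on cycle day 14 for every cycle

def misflagged_share(entry_error_sd):
    """Share of cycles where the estimated ovulation day is off by more than
    one day, i.e. the flagged fertile window no longer covers the true one."""
    estimate = true_ovulation + rng.normal(0, entry_error_sd, n_cycles)
    return np.mean(np.abs(estimate - true_ovulation) > 1)

print("perfect entries:", misflagged_share(0.0))   # ~0.00
print("sloppy entries :", misflagged_share(1.5))   # ~0.50 under these assumptions
```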

On November 14, 2018, Ratnika Prasad commented on Turning Big Data into Clean Electrons at NextEra:

While I see the role of machine learning in increasing the accuracy of demand prediction, energy generation has historically been an industry that is extremely vulnerable to regulatory concerns, which also play a major role in the construction and distribution of energy-related assets, such as where a plant is built. I am curious whether machine learning will simply provide an additional point of intelligent input into what are ultimately human decisions, or whether there is a way to incorporate regulation as an idiosyncratic variable in the predictive model itself.
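On that last question, one hedged sketch of how a regulatory regime could enter such a model is simply as a categorical feature alongside the other demand drivers (illustrative column names and made-up numbers, not NextEra’s actual data or model):

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical sketch of folding a regulatory regime into a demand-forecasting
# model as a categorical feature (illustrative column names, invented data).
df = pd.DataFrame({
    "region_gdp_growth": [2.1, 1.4, 3.0, 2.7, 0.9, 1.8],
    "avg_temperature":   [75, 68, 82, 79, 60, 71],
    "regulatory_regime": ["deregulated", "regulated", "deregulated",
                          "regulated", "regulated", "deregulated"],
    "demand_gwh":        [410, 325, 498, 377, 290, 401],
})

# One-hot encode the regime so it enters the model as its own term; a richer
# model (gradient boosting, etc.) could also let it interact with the
# continuous drivers instead of just shifting the baseline.
X = pd.get_dummies(df.drop(columns="demand_gwh"),
                   columns=["regulatory_regime"], drop_first=True)
y = df["demand_gwh"]

model = LinearRegression().fit(X, y)
print(dict(zip(X.columns, model.coef_.round(2))))
```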

On November 14, 2018, Ratnika Prasad commented on Window into the soul?: Machine learning for song recommendations at Spotify:

To the extent that machine learning helps detect trends in musical patterns, genres, etc. that can be dissected by age, gender, and other demographic characteristics, I wonder if there really is any natural edge for Spotify in going the Netflix route of developing in-house content. Unlike in the film industry, one could argue there is inherently more convergence within musical genres and within distinct artist styles (e.g., techno tracks sharing similar beat durations and sounds, or an artist like One Direction having the same style across songs). Given that large variations in data are necessary for a robust predictive model, does this hamper the ability of the algorithm to find truly significant correlations in a way that would feed into stronger predictive models? One could argue that we look for a lot more diversity in TV and movie watching, in that new content has to be different enough, while Spotify history reveals that humans will repeat musical content a lot more. I am curious if this in any way reduces the statistical power of their models at the individual level.
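One way to see the statistical-power worry in miniature (a toy simulation with made-up numbers, nothing to do with Spotify’s actual models): when the input feature varies less across observations, the same fitting procedure recovers the underlying relationship far less precisely.

```python
import numpy as np

# Toy illustration (hypothetical data): with the same true relationship and
# the same noise, less variation in the input feature makes the estimated
# coefficient noisier, i.e. weaker statistical power.
rng = np.random.default_rng(7)

def coef_spread(x_spread, n=200, trials=2_000, true_beta=0.5):
    """Fit y = a + b*x by least squares many times; return the spread of b-hat."""
    betas = []
    for _ in range(trials):
        x = rng.normal(0, x_spread, n)            # feature, e.g. variety in tempo
        y = true_beta * x + rng.normal(0, 1, n)   # listening outcome + noise
        betas.append(np.polyfit(x, y, 1)[0])      # slope estimate
    return np.std(betas)

print("diverse listening (high feature variance):  ", coef_spread(2.0))
print("repetitive listening (low feature variance):", coef_spread(0.5))
```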

I really worry about the question you raised about the line between social responsibility and providing people news so curated that it compels them to use JT. With Facebook and Twitter there is already a major concern about the validation of motivated reasoning, i.e. the behavior whereby people seek out facts that validate what they already believe while discounting or disbelieving facts that don’t conform to their worldview. There is a real danger in our society of people becoming increasingly concentrated in echo chambers that resonate with their beliefs, and social media and news outlets have a real role to play in helping to fix this problem. At the same time, one can argue that the only real role of a business is to maximize profit for its shareholders. At what point does some kind of ethical prerogative become more important than the fundamental job of a firm to be a profit-maximizing entity? Should the government be regulating businesses like JT for the public good? Once you go down the slippery slope of “benevolently” inspired social diktats on business, it becomes hard to say where the line for acceptable government interference really begins or ends. I truly don’t know what the right answer is, but I am curious whether similar concerns have been sparked in China about JT leading people to become less informed and more prone to developing hardline views, causing society to become more polarized.

While I see the application of machine learning through FLISR and STAR as a way to improve PG&E’s performance through smart sensing and fault fixing (reducing variability) as well as rerouting (thereby adding a buffer of sorts), I am curious whether there are predictive applications being used at commercial scale. Has machine learning been applied to load forecasting? Moreover, are there any inherent challenges to the reliability of predictive models given that they contain idiosyncratic shock variables such as variability in weather and regulatory changes? I imagine that the accuracy of predictive algorithms will improve with time, but I am curious how they control for idiosyncratic variables in the logit or regression models.
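To make that last question concrete, here is a minimal sketch (synthetic data and invented coefficients, not PG&E’s actual models) of a load-forecasting regression that folds in both weather and a one-off regulatory change, with the latter treated as an intervention dummy rather than an ordinary driver:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical load-forecasting sketch on synthetic hourly data: weather enters
# as an ordinary feature, while a one-off regulatory change is handled as an
# intervention dummy so the model can shift its baseline after it takes effect.
rng = np.random.default_rng(0)
hours = pd.date_range("2018-01-01", periods=24 * 365, freq="H")

temp = 60 + 20 * np.sin(2 * np.pi * hours.dayofyear / 365) + rng.normal(0, 3, len(hours))
post_rate_change = (hours >= "2018-07-01").astype(int)    # e.g. a new rate structure
load = (500 + 4 * np.maximum(temp - 65, 0)                # cooling demand
            + 30 * hours.hour.isin(range(17, 21))         # evening peak
            - 25 * post_rate_change                       # demand response to the change
            + rng.normal(0, 10, len(hours)))              # idiosyncratic noise

X = pd.DataFrame({
    "cooling_degrees": np.maximum(temp - 65, 0),
    "evening_peak": hours.hour.isin(range(17, 21)).astype(int),
    "post_rate_change": post_rate_change,
})
model = LinearRegression().fit(X, load)
print(dict(zip(X.columns, model.coef_.round(1))))  # recovers roughly 4, 30, -25
```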