This is a really interesting article and an even more interesting company. To answer your questions, I don’t think this should incorporate doctors; it should remain more of a community application. Yes, there is a risk of false information being spread, but I would imagine that with so many stakeholders it wouldn’t spread too far. It would be no different from false information spreading between people in person. Second, I wouldn’t worry about falsely reported data. If this occurs, it would mostly be thrown out as spurious data at the scale at which the data is being reported.
LEGO sets are a product that doesn’t require, or even encourage, trying before buying. LEGO would be smart to move almost all of its sales online to cut costs. I can envision a future in which a user submits their own innovative idea and it is produced solely for that client and for anyone else who wishes to build a LEGO set based on that idea. Lastly, LEGO could also push open innovation by not only creating new designs but also creating new types of LEGO pieces, thus reinvigorating and re-envisioning the brand altogether.
This is a really cool and ingenious use of additive manufacturing. My question for the company is: what barriers to entry do they have? Why can’t a space company start doing this as well? Are there patents around the technology? Is this so hard to do logistically that it isn’t worth competitors’ time? I look forward to the day when entire spaceships are created using AM in space!
Thanks for the read and for enlightening me on the increased manufacturing and prototyping speeds of additive manufacturing. One thing I remain skeptical about is the breadth of its use cases. Can this really be applied to other athletic wear? Can a tennis ball really be made through additive manufacturing? I remain skeptical, but maybe a new type of ball will be created that enhances the game of tennis, and the sport will evolve with the change. I am excited to see how this technology permeates society going forward.
I think you posed some incredibly interesting questions around responsibility. Will this, in the future, become the sole tool for predicting and avoiding mental health episodes, and if so, who takes the blame for a missed episode? My opinion is that this would always have to act as a supplemental technology and never a sole indicator. That doesn’t mean, however, that its importance can’t grow; it just should not take over responsibility fully, even if it becomes better than humans at predicting such episodes.
Interesting article, but the really interesting questions for me are around implications, not actual rollout timelines and priorities. When autonomous driving does become mainstream and an accident does occur, who becomes liable? Is it the company behind the AI systems? If they are off the hook, who then becomes responsible? Is Tesla required to make modifications based on past accidents?
The ethical implications of AI in life-and-death scenarios are the truly interesting questions, but the article does a good job of laying out some of the basics.