While I appreciate the cultural struggles associated with any dramatic change, I think it’s critical to consider both the changes to the R&D process and the potential coexistence of standard and open innovation in the context of NASA today, not in a vacuum that considers only the cultural friction that will accompany any change. Four contextual elements that I believe are important when thinking about the potential R&D changes are: i) NASA is a budget-constrained federal agency in the U.S.; ii) the aerospace and, now, space travel industries are developing rapidly; iii) scientific progress has historically been achieved through collaborative rather than closed R&D approaches (including the partnerships noted by the author); and iv) NASA’s mission: “drive advances in science, technology, aeronautics, and space exploration to enhance knowledge, education, innovation, economic vitality and stewardship of Earth.” Considering these four ideas alongside the successes of open innovation to date, such as the new approach to forecasting solar flares that improved both accuracy and economics, it’s impossible not to read the tea leaves: the R&D processes must change. NASA must adapt. Without evolving, I don’t see how NASA can stay truly committed to its mission. Forgoing the opportunity for more scrutinized, better, and faster solutions from open innovation seems counter to everything NASA stands for, and thus argues for whatever cultural change might be required – as difficult as it may be – for NASA to remain relevant. I also believe there will be strong positive repercussions for NASA scientists as they are empowered with more information, brainpower and overall resources. In short, the downsides are minimal relative to the potential upsides.
As a taxpayer who has watched the decline of NASA’s budget as a percentage of the total federal budget and who believes in the many benefits the organization brings to society, embracing open innovation is a no-brainer. The only question is how to manage the cultural friction that will undoubtedly exist in the near term.
This is inspiring to read about and provides a glimmer of optimism for teachers, parents and students alike. Not only are the resources valuable across different use cases, but the inherent collaboration and sharing of ideas surely has compounding effects.
While it’s disappointing that TPT’s school-level purchasing program is only being used by ~2% of US schools, I agree with J.W.P and question whether such an “enterprise-level,” end-to-end solution is what TPT should strive for. The archaic world of education procurement is slowly moving toward a new paradigm with eLearning, and TPT seems very good at solving the problem it set out to solve: sharing materials, lesson plans and best practices among teachers. Moving toward targeting school and procurement administrators would fundamentally shift the orientation of the platform and might create unwanted consequences. However, even if TPT should remain focused on the sharing of information (similar to Wikipedia vs. paid published resources), I do believe more can be done both to i) compel teacher and school engagement and ii) help teachers pay for materials. To Nancy’s point, $500 out of pocket is absurd, and every effort should be made to help teachers supplement curricular materials with additional resources from the likes of TPT. Given the decentralization of schools, this seems difficult, but potential channels might include teachers’ associations and unions (and their relationships with different states / school systems) as well as local / state legislation. To me, momentum is all TPT needs to reach critical mass, although it will take some time.
I greatly appreciate the application of ML to policing and to helping departments “police smarter” (particularly within the context of budget-constrained departments and personnel-constrained shifts); however, in thinking about how to implement these technologies, I think it’s extremely important to consider what they rely on: quality inputs. It’s unclear exactly what PredPol is using from a data standpoint but, beyond data such as historical incident reports (likely racially and geographically biased), it’s important to scrutinize what the algorithms are causally associating with crime. What are these correlations, and do they pass the smell test with humans? The root of my skepticism is uncertainty over whether PredPol is simply a data aggregator or is actually capable of producing the incredible societal benefits of predictive analytics and, if so, how.
Additionally, while I believe it’s smart to outsource functions such as routing to technology providers (with or without machine learning), I think it’s dangerous for departments to over-index on such solutions at the expense of police engaging in traditional activities as part of the community and “walking the beat” from time to time. Communities are dynamic, and it’s unclear what edge ML judgment might have over human judgment as things change and evolve with time. I could be totally wrong, but I worry about “outsourcing” too much to technology providers and ending up with a disengaged police force whose psychology holds that a machine knows what’s best. This shift in psychology is what worries me most.
Though I’m not sure whether it’s a current input, one really interesting addition I might consider is input from individuals in the community, both to help paint the picture of local context and to simply crowd-source what is actually happening at any given point in time. The combination of human inputs and ML technology seems particularly powerful and empowering for our public servants. I also think crowd-sourcing conditions on the ground might help not only predict criminal incidents but also prevent them, by making the likes of PredPol a tool for everyone – not just the police.
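The feedback-loop worry about biased historical inputs can be made concrete with a toy simulation. Everything here is an assumption for illustration – the district names, incident rate, and proportional allocation rule are hypothetical and are not PredPol’s actual model:

```python
import random

random.seed(0)

# Two districts with the SAME true incident rate, but district A starts
# with more recorded history due to heavier past patrolling.
TRUE_RATE = 0.3                 # chance an incident is observed per patrol-hour
recorded = {"A": 60, "B": 40}   # biased historical incident counts
PATROL_HOURS = 100              # total hours to allocate each year

for year in range(10):
    total = recorded["A"] + recorded["B"]
    for district in list(recorded):
        # allocate patrol hours proportional to recorded incidents...
        hours = round(PATROL_HOURS * recorded[district] / total)
        # ...but incidents are only recorded where officers patrol
        recorded[district] += sum(
            1 for _ in range(hours) if random.random() < TRUE_RATE
        )

share_A = recorded["A"] / (recorded["A"] + recorded["B"])
print(f"share of recorded incidents in district A: {share_A:.2f}")
```

Even though both districts have identical true rates, the initial bias persists in the recorded data rather than washing out, because the system only measures where it looks. This is the sense in which a model trained on incident reports can mistake patrol allocation for crime.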
Results count and, personally, it’s hard not to admire the record-high 86.5% retention at UA last year. What’s more, UA is a large public research university and, with a student body north of 35,000 undergrads, surely faces resource constraints in both identifying and assisting students in need along any time horizon – let alone the first 12 weeks. Furthermore, even if many students recognize their struggles, they may not appreciate the resources available, know how to seek them out, or simply have the self-confidence to reach out on their own. In short, the need and demand seem significant, particularly for students without traditional / legacy support systems like family, friends and communities.
I sympathize with the critics of the system and remain highly wary of unintended consequences of the data collection, particularly as it relates to the precedent it sets for both i) future data collection and ii) use cases for that data. I would also note that the system will only improve its efficacy as more data is collected and more variables are analyzed, which is clearly a slippery slope with regard to both collection and application(s). Thus, UA must be careful and, while I believe the collection of data on nearly 800 factors can be extremely beneficial to students and the university itself, I believe strong barriers need to be put in place:
1) As an academic institution, UA must be explicit about each and every use case of the data. Personally, I believe it should only use the data to see whether people “hit the screen,” much like a diabetic typically uses a glucose monitor: only when applicable.
2) Students must have the ability to opt out.
3) Students must be informed each quarter of what data is being tracked (email dissemination).
4) A Chinese wall must be established so that few, if any, individuals can see both the data and individuals’ identities at the same time. All anyone should be able to see is whether someone “hits the screen” – with few exceptions.
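The Chinese-wall idea in barrier 4 can be sketched in code. This is a minimal illustration, not UA’s actual system: the student IDs, the two risk factors, the salt, and the toy screening rule are all hypothetical stand-ins for the real model’s ~800 factors.

```python
import hashlib

def pseudonym(student_id: str, salt: str = "campus-secret") -> str:
    # One-way pseudonym; the salt is held only by the identity custodian.
    return hashlib.sha256((salt + student_id).encode()).hexdigest()[:12]

# Identity custodian's table (never visible to the screening team).
identity_table = {pseudonym(sid): sid for sid in ["S001", "S002", "S003"]}

# Screening team's table: pseudonym -> risk factors only, no identities.
features = {
    pseudonym("S001"): {"missed_logins": 9, "gpa_drop": 1.2},
    pseudonym("S002"): {"missed_logins": 1, "gpa_drop": 0.0},
    pseudonym("S003"): {"missed_logins": 7, "gpa_drop": 0.9},
}

def hits_screen(record: dict) -> bool:
    # Toy rule standing in for the real model's ~800 factors.
    return record["missed_logins"] > 5 and record["gpa_drop"] > 0.5

def flagged_pseudonyms() -> list:
    # The screening side sees only which pseudonyms hit the screen.
    return [p for p, rec in features.items() if hits_screen(rec)]

def reidentify(pseud: str, authorized: bool) -> str:
    # Crossing the wall requires explicit authorization (the "few
    # exceptions" in barrier 4), e.g. an advisor acting on a flag.
    if not authorized:
        raise PermissionError("re-identification requires authorization")
    return identity_table[pseud]

flags = flagged_pseudonyms()
print(len(flags))  # 2 students hit the screen
```

The design choice is that the screening model and the identity lookup live on opposite sides of the wall, so routine analysis never touches names; only an authorized intervention step can join the two.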
For an academic institution focused on helping individuals learn, grow and develop, this particular application of machine learning seems beneficial to all parties involved, but it must be pursued carefully.
I agree that additive manufacturing (“AM”) has the potential to greatly enhance current logistical frameworks supporting both combat and non-combat missions (particularly as recent shifts in battlefield tactics require the operation of smaller, more distributed and more isolated units); however, I urge extreme caution toward the notion that the Marine Corps “must effectively leverage additive manufacturing or risk losing the next war.” I am also hopeful that one day AM will revolutionize both our military’s capabilities and the costs and time horizons associated with replacement parts, but I worry about the cart getting ahead of the horse. We need to be realistic about how to pursue the use of AM and remain skeptical of the technology until it is proven. The lives of our children are potentially at risk.
To expand a bit further, AM is still a nascent technology and, as the author notes, the Marines must be sure that any AM applications have been sufficiently tested and certified with the likes of the FAA and ISO before deploying any additively manufactured mission-critical components. In reality, that day may arrive in the near future or the distant future (if ever), and the Marines should not assume it will arrive at all. This causal jump is dangerous and should not be allowed to change our psychology / outlook. I agree that it is appropriate to experiment with 3-D printers in field environments, but one should not assume that this will causally lead to the replacement of existing supply chain infrastructure at any point in time. That will be a future calculation based on a number of inputs, including costs, environments, and the conviction that components will be able to meet demanding field requirements in almost any circumstance. This will require both i) significant testing across a range of environments and over time and ii) quality-control testing of finished products in the field to help control for and eliminate any doubt about the quality, integrity, endurance, and variability of components.
Military leaders and policy makers should push for significant R&D in AM, but should be wary of the belief that current supply chain models can be easily disintermediated or replaced – partially or fully.
As a potential shareholder, I find the market-cap-erosion analysis an interesting way to consider investments in AM, and I do believe TDG should invest in some AM capabilities (either through further M&A or organic expansion); however, it should do so incrementally and does not need to make a $2 billion leap without a clear timeline or a strategy for where in the TDG supply chain and in which product categories AM will be focused. I also don’t necessarily agree with the causal logic: if DoD sales are reduced by 10 or 15%, is that a reason to scale the $2 billion investment up proportionally? In a world with AM, many things might change, including the economic model of TDG.
AM is still a nascent technology and, despite serious potential in the aerospace supply chain, it is not yet economically viable for most known aircraft parts produced at volume. GE has received FAA approval for select, geometrically complex pieces, but AM pieces have not replaced most (if any) core load-bearing structures to date, and the integrity of parts over time is uncertain. TDG’s core competency is high-performance proprietary parts, particularly in the aftermarket, and it should explore the potential of AM to complement current methods and ensure product defensibility, but there is no reason to believe that AM will simply change everything, and do so immediately, warranting a significant blank-check investment.