If Sherwin Williams didn’t have this open innovation model, I think they would have really painted themselves into a corner. For a scout evaluating any new technology, demonstrated proof of concept is the most important element a submission to an innovation platform can include. In cases like these, the main questions for stakeholders in development will be production and technical feasibility. In biotech, we were often faced with the arduous task of repeating existing data to demonstrate proof of concept. Thankfully this too can be outsourced, which speeds up the evaluation process. I imagine that Sherwin Williams goes through a similar process themselves to validate new tech that comes in the door, and then decides whether a market for it really exists.
Very interesting insight into the casting and manufacturing process in aviation; before I read this I was flying blind. You make a very interesting point at the end regarding the potential of switching to metal additive manufacturing, which could recreate the high switching costs. Given the company’s already large investments, how will they balance the needs of today against the capex required to fund some of these (presumably) expensive manufacturing capabilities? And what is preventing the larger aircraft manufacturers from bringing this kind of technology in house?
A very cool idea that could play a really interesting role in health care delivery, and it will be worth watching how it compares to classic epidemiological collection techniques. I can see really interesting future roles for examining the spread of disease (e.g. flu or other infectious agents), as long as quality control remains high. The big question will ultimately be user adoption, but in instances where the payment is fair, I can see this taking off, especially in underdeveloped communities.
Recursion is a really interesting startup, and I applaud what they are trying to do, especially with regard to drug repurposing, a desperately underserved area of research and one that could bring more therapies to market faster and leaner, particularly for rare and neglected diseases. One issue I see with their discovery pipeline is the use of knockdown technology to create disease models of interest in a limited number of cell lines, rather than knockout technology in more physiologically relevant cells. The problem is that it is very difficult to tell, based on their imaging technology alone, whether they are faithfully recapitulating the disease of interest. If the disease state isn’t faithfully generated, or it isn’t produced in a cell type that is physiologically relevant to that disease (e.g. using a skin cell instead of a nerve cell), the screens may generate needless false positives (compounds that appear to work but actually don’t) and a number of false negatives (drugs that could be effective but whose activities are missed in the assays). Buttressing their initial screening results with more physiologically relevant, stem-cell-derived or patient cells may produce higher-quality data and lead to more clinical candidates or repurposed drugs.
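To make the false positive/false negative distinction concrete, here is a toy sketch of how a screening hit in a non-faithful disease model can diverge from what actually works in patients. The compound names and hit calls are entirely made up for illustration; this is not Recursion’s pipeline.

```python
# Toy illustration: a screen's hit call vs. a compound's true clinical
# effect. A model that doesn't recapitulate the disease produces the
# two error cases below. All data here is hypothetical.

def classify(compound):
    """Compare the model's hit call to the true effect in patients."""
    true_effect = compound["works_in_patients"]
    model_hit = compound["hit_in_model"]
    if model_hit and not true_effect:
        return "false positive"   # looks active in the model, isn't in people
    if true_effect and not model_hit:
        return "false negative"   # real activity the assay missed
    return "true positive" if true_effect else "true negative"

compounds = [
    {"name": "A", "works_in_patients": False, "hit_in_model": True},
    {"name": "B", "works_in_patients": True,  "hit_in_model": False},
]
print([classify(c) for c in compounds])  # ['false positive', 'false negative']
```

Both error types are costly: false positives burn money downstream, while false negatives silently discard drugs that might have worked.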
Definitely an interesting combination of acquisitions in Flatiron and Foundation, which will allow Roche to support more efficient clinical trials using segmented patient populations. By segmenting patients based on key genomic and phenotypic biomarkers, Roche can distinguish the individuals who would benefit from a particular therapy from those who wouldn’t. A classic example of biomarker-based patient segmentation is Herceptin (Genentech), which is used to treat breast cancer. Before treatment, patients are tested for overexpression of HER2 in a breast cancer biopsy. Patients who don’t show HER2 overexpression will not benefit from the drug, and treatment would only expose them to potential side effects. By utilizing key biomarker endpoints, Roche can run leaner, more cost-effective trials and increase its chances of approval by treating only those patients in whom a response is anticipated.
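The HER2 decision rule above boils down to a simple enrollment filter. The sketch below is a minimal illustration; the field names and the IHC score cutoff are my assumptions for the example, not Roche’s or Genentech’s actual trial criteria.

```python
# Hypothetical biomarker-based patient segmentation. The "her2_ihc_score"
# field and the 3+ cutoff are illustrative assumptions (IHC 3+ is commonly
# read as HER2-positive, 0/1+ as negative).

def eligible_for_her2_therapy(patient):
    """Enroll only patients whose biopsy shows HER2 overexpression."""
    return patient["her2_ihc_score"] >= 3

patients = [
    {"id": "P1", "her2_ihc_score": 3},  # overexpressor -> expected to benefit
    {"id": "P2", "her2_ihc_score": 1},  # no overexpression -> excluded
]
cohort = [p["id"] for p in patients if eligible_for_her2_therapy(p)]
print(cohort)  # ['P1']
```

The value of the Flatiron/Foundation data is essentially in making filters like this possible at scale, across many biomarkers at once.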
Really interesting questions for big pharma. On the discovery side, computer-aided drug design (CADD) has so far played only a small, supporting role in selecting small molecules for preclinical development, but in the future AI will hopefully allow CADD to be more impactful. The main issues are a lack of computing power to simulate the billions of molecular interactions between drug and target, and the fact that we often don’t know the structure of the target, which these simulations require. Protein structure information has to be obtained using expensive and time-consuming techniques; in some cases it takes years to determine the structure of a protein, and even then the result is not always a biologically relevant structure. In the future, researchers hope to be able to determine protein structure with the click of a mouse, but we are still years away.
One of the main issues with CADD is that once you have a target in mind, and can demonstrate that your molecule binds to that target, it is currently impossible to determine in silico whether the molecule will cause toxicity by binding to additional “off-targets”. These off-target effects can stop the heart, or inhibit essential liver enzymes and lead to systemic toxicity. Without protein structures for all of these vital proteins, which number in the thousands, in silico CADD can only point us in the right direction for small-molecule design. In the near term (10 years) there will definitely be a need to test these molecules in cells, animals, and eventually people to determine safety and efficacy.
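The off-target problem described above is, in practice, handled experimentally: candidates are run against a panel of known liability proteins and flagged if they bind too tightly. Here is a minimal sketch of that logic; the panel members, affinity values (Kd in nM), and the cutoff are illustrative assumptions, not a real safety panel.

```python
# Hypothetical off-target safety screen: flag a candidate if it binds
# any panel protein more tightly than a cutoff. Lower Kd = tighter binding.
# Panel, affinities, and cutoff are made-up numbers for illustration.

OFF_TARGET_PANEL = ["hERG", "CYP3A4", "CYP2D6"]  # cardiac channel, liver enzymes
CUTOFF_NM = 1000.0  # binding tighter than 1 uM raises a flag

def off_target_flags(measured_kd_nm):
    """Return the off-targets a molecule binds below the safety cutoff."""
    return [t for t in OFF_TARGET_PANEL
            if measured_kd_nm.get(t, float("inf")) < CUTOFF_NM]

candidate = {"hERG": 250.0, "CYP3A4": 5000.0}  # tight hERG binding measured
print(off_target_flags(candidate))  # ['hERG']
```

Until in silico methods can predict these affinities reliably across thousands of proteins, the Kd values feeding a filter like this have to come from wet-lab assays.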
A great distillation of the key issues in the field. Is it OK for 23andMe to commercialize their data? Absolutely, but with the caveat that the patients who submitted their samples consent to the commercialization, and that, in cases where it is warranted, these patients are compensated.
There are a number of academic medical institutions that have already built large “biobanks”, in which patients consent to having their blood and tissue samples de-identified and through which academic researchers can then access medical records and genotype data to build correlations between genes and disease. The real trick for 23andMe, in trying to commercialize a similar opportunity, will be to identify ways to access medical records or other phenotypic data that will allow them to start building these correlations. In all of these cases, the more data the better, but there is also real risk here. In many instances, just because a patient carries a particular mutation does not guarantee that the disease will ever manifest. If someone then makes drastic changes to their lifestyle, or chooses an aggressive treatment for a disease that may never come to pass, it may do more harm than good. As with any other genetic sequencing and analysis service, responsible disclosure is paramount.
A very cool article! With regard to the liver transplant in mice that recapitulated circulating liver enzymes, I presume this was done in a SCID (severe combined immunodeficient) mouse, which eliminates the chance of organ rejection. It will be really interesting to see how the company scales liver transplants when the threat of rejection in a person is real. In the most costly scenario, they will need to custom-manufacture a liver for each individual, matched to prevent rejection in the same way organ donors are currently matched. In an ideal scenario, they will be able to create genetically modified transplantable liver tissue that reduces or eliminates the chance of rejection, creating a universal organ for donation. We are many years away from this, but if they can do that (manufacturing issues aside), they will be able to scale and generate material that will help a lot of people.