Podcast: Pipelines & Science Budgets

By: Ian McLaughlin & Liana Vaccari

References:
Pipelines:
  1. https://primis.phmsa.dot.gov/comm/PipelineBasics.htm?nocache=2436
  2. https://primis.phmsa.dot.gov/comm/construction/index.htm?nocache=7276
  3. https://www.blm.gov/style/medialib/blm/wy/information/NEPA/cfodocs/greencore.Par.40103.File.dat/PODch3.pdf
  4. https://www.fractracker.org/2016/06/introduction-oil-gas-pipelines/
  5. https://motherboard.vice.com/en_us/article/dakota-access-pipeline-is-back-on-skipping-environmental-review
  6. https://www.fws.gov/Midwest/endangered/permits/hcp/nisource/2013NOA/pdf/NiSourceHCPfinalAppndxJ_HDD.pdf
  7. https://daplpipelinefacts.com/


Science Budget:
  1. https://www.nigms.nih.gov/Research/CRCB/IDeA/Pages/default.aspx
  2. https://www.statnews.com/2017/03/28/budget-nih-cuts/
  3. http://www.sciencemag.org/news/2017/03/trumps-first-budget-analysis-and-reaction
  4. https://www.axios.com/trump-proposal-would-slash-nih-funding-this-year-2333226759.html
  5. https://www.bloomberg.com/politics/articles/2017-03-28/white-house-proposes-large-cuts-to-nih-research-grants-this-year?cmpid=socialflow-twitter-business&utm_content=business&utm_campaign=socialflow-organic&utm_source=twitter&utm_medium=social
  6. https://l.facebook.com/l.php?u=http%3A%2F%2Fwww.eenews.net%2Fassets%2F2017%2F03%2F28%2Fdocument_gw_06.pdf&h=ATOf0STgDeQf-QyRHoEzIUykBbplzcFQG6O6Ik_kbVXRhTyQVcqoOQuqFj_WZSe8Bl_By5Ue6Eb-ScNORf8YzAAMvFSSHLtwSO-SgFQHpevQd5YTEgYQNZ4KN0yz_8BS56QF98Dv_OndsK_nPeY
  7. http://www.sciencemag.org/news/2017/03/playing-no-trump-aaas-policy-forum

UPenn Scientists Are Investigating Better Treatments for Sarcoma Tumors

by Adrian Rivera-Reyes and Koreana Pak

Soft tissue sarcomas (STS) are rare cancers of the connective tissues, such as muscle, fat, and blood vessels. Soft and elastic, sarcoma tumors can push against their surroundings as they grow silently and undetected. A sarcoma residing in an arm, torso, or thigh can take years to begin causing pain, and by the time a patient brings the tumor to a doctor, amputation may be unavoidable1.

In 2017, it is predicted that 12,390 Americans will be diagnosed with sarcoma, and approximately 5,000 patients will die from these tumors2. But the vast majority of these patients aren’t dying from the first tumor in their arm or leg—the real danger is metastasis, which is responsible for more than 90% of cancer-related deaths3-5.

Metastasis occurs when tumor cells leave their original site and colonize a new area of the body, such as the lungs, liver, or bones3-5. The current treatment options for sarcoma—surgery, chemotherapy, and radiation—are not very effective against metastases6,7. Only 10-25% of STS patients respond to chemotherapy, leaving surgery as the best option for many6,7. However, tumor cells can spread to other parts of the body even in the early stages of sarcoma, long before the first tumor is noticed. By the time that tumor is surgically removed, metastases may already have formed elsewhere in the body.

As a sarcoma tumor grows, it becomes increasingly starved of oxygen and nutrients. Under these conditions, cancer cells are driven to metastasize. Moreover, tumor hypoxia, or low oxygen within the tumor, is an important predictor of metastasis and poor survival in sarcoma patients8-10. In other words, the more hypoxic the tumor, the lower a patient's chance of surviving.

But how does this actually work? How does hypoxia drive sarcoma cells out of a tumor and into other organs, such as the lungs? Surprisingly, UPenn scientists have found it has a lot to do with collagen11!

[Image: Metastasizing tumor cells (pink) associated with collagen (blue). Image taken by Koreana Pak.]
Collagen is the most abundant protein in the human body, but you’ll know it best as the substance that makes your skin flexible and elastic12. This elastic material has many uses, and you can find it in gelatin, marshmallows, surgical grafts—and hypoxic tumors. In STS tumors, the low oxygen levels cause collagen to form sticky, tangled fibers.  Sarcoma cells will actually hijack this disorganized collagen and use it as a “highway” over which they can migrate out of the tumor and into other organs11.

If these hypoxic collagen “highways” were disrupted in patient tumors, cancer cells could be prevented from metastasizing. But how?

In an effort to make this therapy a reality, UPenn scientists used models of human sarcoma and metastasis in which they could disrupt collagen. By deleting the hypoxia factors HIF-1 and PLOD2, they could restore normal collagen in tumors, which reduced tumor metastasis. Excitingly, they found that minoxidil, a drug usually used to treat hair loss, also reduced tumor collagen and halted metastasis11.

Whether minoxidil could be used for human patients is unclear; nevertheless, drugs that reduce hypoxic targets like PLOD2 could serve as promising anti-metastatic therapies.

In a follow-up study, these scientists looked at another hypoxia factor, called HIF-213. While related to HIF-1, this protein actually plays a very different role in sarcoma. Elimination of HIF-1 is important because it reduces metastasis11. But when it comes to primary sarcoma tumors, the expression of HIF-2 can help reduce cancer cell growth13.

Again using a model of human sarcoma, the authors found that tumors grew larger when HIF-2 was eliminated. They also treated these tumors with a clinically approved drug, Vorinostat, and saw that HIF-2 levels increased and, as a consequence, the tumors shrank13.

Sarcoma Treatment: Going Forward

The diversity of STS, which comprises about 50 different types1, as well as the low incidence of cases, makes it very challenging to develop better treatments for sarcoma. Clinical trials often combine patients with different types of sarcomas into a single study, even though the trial may not be a good fit for all the patients. A more specific approach is needed to treat the different types of sarcomas.

Through their research on hypoxia in sarcoma, UPenn scientists hope to improve current treatments. Their observation that HIF-1 and HIF-2 play opposing roles in different cancers is of particular importance, because HIF inhibitors are already being developed for cancer therapy11,13. Doctors could also use markers like HIF-2 to predict how well patients will respond to different treatments; for example, patients whose tumors have low levels of HIF-2 might respond well to treatment with Vorinostat. Unfortunately, such predictive markers are rare in STS, and the identification of additional markers should complement the development of new treatments.

Complementing standard chemotherapy with new sarcoma-specific therapies would greatly improve current treatment options. However, treating the primary tumor alone is not sufficient, as metastasis remains primarily responsible for patient death6,7. For this reason, further study into HIF-1/PLOD2 and the role of collagen in metastasis is needed. Through the development of drugs like minoxidil, which target harmful tumor collagen, we see exciting potential for the future of sarcoma therapy and patient survival.

References

1. Cancer.Net Editorial Board. (2012, June 25). Sarcoma, Soft Tissue – Introduction. Retrieved on April 4, 2017 from: http://www.cancer.net/cancer-types/sarcoma-soft-tissue/introduction

2. The American Cancer Society medical and editorial content team. (2017, January 6). What Are the Key Statistics About Soft Tissue Sarcomas? Retrieved on April 4, 2017 from https://www.cancer.org/cancer/soft-tissue-sarcoma/about/key-statistics.html

3. Mehlen, P., & Puisieux, A. (2006). Metastasis: a question of life or death. Nature Reviews Cancer, 6, 449-458.

4. Monteiro, J. & Fodde, R. (2010). Cancer stemness and metastasis: therapeutic consequences and perspectives. European Journal of Cancer, 46 (7), 1198-1203.

5. Nguyen, D.X., Bos, P.D., & Massagué, J. (2009). Metastasis: from dissemination to organ-specific colonization. Nature Reviews Cancer, 9, 274-284.

6. Linch, M., Miah, A. B., Thway, K., Judson, I. R., & Benson, C. (2014). Systemic treatment of soft-tissue sarcoma-gold standard and novel therapies. Nat. Rev. Clin. Oncol. 11(4), 187-202.

7. Lorigan, P., Verweij, J., Papai, Z., Rodenhuis, S., Le Cesne, A., Leahy, M.G., Radford, J.A., Van Glabbeke, M.M., Kirkpatrick, A., Hogendoorn, P.C., & Blay, J.Y. (2007). Phase III trial of two investigational schedules of ifosfamide compared with standard-dose doxorubicin in advanced or metastatic soft tissue sarcoma: a European Organization for Research and Treatment of Cancer Soft Tissue and Bone Sarcoma Group Study. Journal of Clinical Oncology 25 (21), 3144-3150.

8. Shintani, K., Matsumine, A., Kusuzaki, K., Matsubara, T., Santonaka, H., Wakabayashi, T., Hoki, Y., & Uchida, A. (2006). Expression of hypoxia-inducible factor (HIF)-1 alpha as a biomarker of outcome in soft-tissue sarcoma. Virchows Arch. 449 (6), 673-681. 

9. Nordsmark, M., Alsner, J., Keller, J., Nielsen, O.S., Jensen, O.M., Horsman, M.R., & Overgaard, J. (2001). Hypoxia in human soft tissue sarcomas: adverse impact on survival and no association with p53 mutations. Br. J. Cancer 84 (8), 1070-1075. 

10. Rajendran, J.G., Wilson, D.C., Conrad, E.U., Peterson, L.M., Bruckner, J.D., Rasey, J.S., Chin, L.K., Hofstrand, P.D., Grierson, J.R., Eary, J.F., & Krohn, K.A. (2003). [(18)F]FMISO and [(18)F]FDG PET imaging in soft tissue sarcomas: correlation of hypoxia, metabolism, and VEGF expression. Eur. J. Nucl. Med. Mol. Imaging, 30 (5), 695-704.

11. Eisinger-Mathason, T.S.K., Zhang, M., Qiu, Q., Skuli, N., Nakazawa, M.S., Karakasheva, T., Mucaj, V., Shay, J.E., Stangenberg, L., Sadri, N., Puré, E., Yoon, S.S., Kirsch, D.G., & Simon, M.C. (2013). Hypoxia-dependent modification of collagen networks promotes sarcoma metastasis. Cancer Discovery, 3 (10), 1190-1205.

12. What is collagen? Retrieved on April 4, 2017 from http://www.vitalproteins.com/what-is-collagen.

13. Nakazawa, M.S., Eisinger-Mathason, T.S., Sadri, N., Ochocki, J.D., Gade, T.P., Amin, R.K., & Simon, M.C. (2016). Epigenetic re-expression of HIF-2 alpha suppresses soft tissue sarcoma growth. Nature Communications, 7, 10539.

Podcast: Contacting Congress, 21st Century Cures, & Antibiotics

By: Ian McLaughlin & Liana Vaccari

Antibiotic resistance, policy and prevention
By: Liana Vaccari

What are antibiotics? Antibiotics, also called antimicrobial or antibacterial agents, are chemicals that disrupt the life cycle of bacteria in a few different ways: some actively kill the cells, others prevent them from reproducing, and others inhibit their ability to metabolize energy sources. Over the years, they have been used for everything from strep throat to pneumonia1, but their use has recently been dialed back because bacteria are becoming resistant to the antibiotics that currently exist. One reason this has become such a problem is that the discovery of new antibiotics cannot keep pace with bacteria's ability to resist old ones, because developing new drugs is a long and expensive process.2,3
Early this year, a woman died of an infection caused by a strain of bacteria that none of the 26 antibiotics available in America could clear.4 This is unusual and alarming – even if no one else here is infected with the same exact strain, the appearance of this superbug is a reminder that the antibiotics we have are vulnerable.
But how do bacteria become resistant in the first place?3 When someone develops a bacterial infection, it is because a strain of bacteria has set up camp in his or her body, producing toxins that break down tissues or provoking a harmful immune response (e.g., mucus buildup).5 The bacteria can then double in number over relatively short times, quickly reaching thousands or millions of cells. When bacteria multiply, they copy their genetic code from the parent cell to the next generation, but the copying isn't always perfect. It is a bit like telling stories from one generation to the next, or playing the game of telephone: little details change with each retelling. In genes, these changes are called mutations, and no two bacteria carry exactly the same ones. Many mutations have no noticeable effect. Occasionally, though, a mutation renders an antibiotic less effective – perhaps the cell wall becomes stronger and less likely to be damaged by that particular drug, or the cell's metabolism shifts so that it no longer depends on the process the antibiotic blocks. Some bacteria also resist antibiotics by producing biofilms that physically prevent the drug from reaching the cells; others make enzymes that break the antibiotic down, and others can even selectively pump the antibiotic back out of the cell.
With thousands to millions of copies being made, there will inevitably be some mutants, even if only a few, that can resist the antibiotic by one of these means. Your infection might be mostly cleared with the help of both antibiotics and your own immune response, but if even a couple of resistant bacteria survive, they can begin to multiply all over again until there are enough to cause a significant infection once more. You might not be as susceptible to this second infection, because your immune system has learned how to attack these particular bacteria, but some of them will inevitably spread to other people, carrying their resistance to that antibiotic with them.
In the short term, it is to your benefit to use antibiotics right away when you have a bacterial infection: they can quickly clear the infection and return you to full health. But in doing so, you may have hosted bacteria that evolved resistance and will now spread through the population, making the cheap, readily available antibiotic you used less effective for other people. Over time, doctors may have to prescribe stronger and stronger antibiotics to combat the continually mutating and proliferating bacteria, which is how superbugs like the one that killed the woman in Nevada arise. This is dangerous for you as well, because in the long run, overreliance on antibiotics makes it harder for everyone to fight off infections.
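To see why even a tiny resistant minority matters, consider a toy simulation of the dynamic just described. The Python sketch below is purely illustrative – the mutation rate, kill fraction, and population sizes are invented numbers, not clinical values – but it captures the selection step: the drug removes susceptible cells, and whatever survives seeds the next round.
```python
import numpy as np

rng = np.random.default_rng(0)

MUTATION_RATE = 1e-7   # chance a daughter cell mutates to a resistant form (made up)
KILL_FRACTION = 0.999  # fraction of susceptible cells the antibiotic kills (made up)

def grow(susceptible, resistant, generations):
    """Double the population each generation; rare daughters mutate to resistance."""
    for _ in range(generations):
        mutants = rng.binomial(susceptible, MUTATION_RATE)  # new resistant daughters
        susceptible = susceptible * 2 - mutants
        resistant = resistant * 2 + mutants
    return int(susceptible), int(resistant)

def treat(susceptible, resistant):
    """The drug kills almost all susceptible cells but spares resistant ones."""
    return int(susceptible * (1 - KILL_FRACTION)), resistant

s, r = 1_000, 0
s, r = grow(s, r, 15)   # untreated infection expands to tens of millions of cells
print(f"before the antibiotic: {s:,} susceptible, {r:,} resistant")
s, r = treat(s, r)      # treatment wipes out 99.9% of susceptible cells
print(f"right after treatment: {s:,} susceptible, {r:,} resistant")
s, r = grow(s, r, 15)   # survivors regrow; the resistant share has jumped
print(f"after regrowth:        {s:,} susceptible, {r:,} resistant")
```
On a typical run, the resistant fraction of the regrown population is about three orders of magnitude higher than in the original infection – selection in action, with no new biology required.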
This is such an issue that it has been made a priority at both the national and global level. Both the World Health Organization6 and the Centers for Disease Control and Prevention7 are working to:
  • Prevent infections,
  • Increase awareness of antibiotic resistance and stewardship of antibiotics,
  • Track infections that do occur, and
  • Develop new drugs, diagnostics, and interventions
So what can you do to decrease the likelihood of antibiotic-resistant bacteria in your own life?1-3,8-11
  • Antibiotics don’t work on viral infections, which cause common colds, flu, and the typical sore throat. (Strep throat is a bacterial infection, which doctors test for when they swab your throat.) The CDC estimates that at least 30% of antibiotic prescriptions are unnecessary or inappropriate12: people take antibiotics for infections they could have recovered from without the drugs, and in some cases antibiotics are prescribed when they would have no effect at all. If you are going to take antibiotics, make sure what you have is actually a bacterial infection.
  • If you do have a bacterial infection, take the full course of antibiotics that you are prescribed. The earlier mutant bacteria are exposed to the antibiotic, the more likely they are to still respond, and the timeline and doses are carefully designed to kill as many bacteria as possible, even after your noticeable symptoms are gone. The earlier you stop taking the medication, the larger the population of bacteria that survives the first dose.
  • On a day-to-day basis, don’t use antibacterial or antimicrobial soaps. According to the FDA, antibacterial soap is no more likely to prevent infections than regular soap, and the antibacterial additives contribute to antibiotic resistance. (This concern does not apply to alcohol-based hand sanitizers.) Generally speaking, regularly washing your hands with normal soap and water is sufficient and effective at preventing infections otherwise spread by contact.
With these measures in mind, hopefully this appearance by a superbug in Nevada will be an isolated incident.

References:
1. Medical News Today, Antibiotics (2 January 2017), http://www.medicalnewstoday.com/articles/10278.php
3. Centers for Disease Control and Prevention, Drug Resistance (5 January 2017), https://www.cdc.gov/drugresistance/
4. Centers for Disease Control and Prevention, Morbidity and Mortality Weekly Report (13 January 2017), https://www.cdc.gov/mmwr/volumes/66/wr/mm6601a7.htm?s_cid=mm6601a7_w
5. Medline Plus, Bacterial Infections (17 January 2017), https://medlineplus.gov/bacterialinfections.html
6. World Health Organization, Antimicrobial resistance (n.d.), retrieved 15 December 2016, http://www.who.int/antimicrobial-resistance/global-action-plan/en/
7. Centers for Disease Control and Prevention, CDC Role (16 September 2013), https://www.cdc.gov/drugresistance/cdc_role.html
8. Medline Plus, Antibiotics (2 January 2017), https://medlineplus.gov/antibiotics.html
9. Centers for Disease Control and Prevention, Get Smart (14 November 2016), https://www.cdc.gov/features/getsmart/
11. Alliance for the Prudent Use of Antibiotics (n.d.), retrieved 15 December 2016, http://emerald.tufts.edu/med/apua/about_issue/when_how.shtml
12. Centers for Disease Control and Prevention, Press Release (3 May 2016), https://www.cdc.gov/media/releases/2016/p0503-unnecessary-prescriptions.html

Event Recap: Intellectual Property Panel “From Research to Patent”


by Adrian Rivera-Reyes

On November 10th, the Penn Science Policy Group and the Penn Intellectual Property Group at Penn Law co-hosted a panel discussion focused on intellectual property and how to patent scientific research. The panel included Peter Cicala, Chief Patent Counsel at Celgene Corp.; Dr. Dora Mitchell, Director of the UPstart Program at the Penn Center for Innovation (PCI) Ventures; and Dr. Michael C. Milone, Assistant Professor of Pathology and Laboratory Medicine at the Hospital of the University of Pennsylvania (HUP), and Assistant Professor of Cell and Molecular Biology at Penn Medicine.

The event started with introductions of both groups by their respective presidents, followed by Kimberly Li introducing the panelists. Next, Peter gave a short PowerPoint presentation with a general introduction to intellectual property. Below are some key points to understand intellectual property/patent law 1,2:

1) In general, patents provide a “limited monopoly” that excludes others from making, using, offering for sale, selling, or otherwise practicing an invention, but they do not confer upon the patentee a right to use the invention. Thus, patents serve as a form of protection for the owner.
2) A single invention can only be patented once; once the patent on that invention expires, others may not file to patent the same invention again.
3) In order to confer a patent, the United States Patent and Trademark Office ensures that inventions of patentable subject matter meet the following legal requirements: i) inventions must be novel, ii) inventions must be useful, and iii) inventions must be non-obvious.
4) Utility patents last only 20 years from the date of filing. After 20 years, anyone can make, use, offer for sale, sell, or practice the invention; a single invention cannot be re-patented after the term is done. In contrast, trademarks and trade secrets can last indefinitely, and copyrights last for the lifetime of the author plus 70 years.
5) The United States Patent and Trademark Office follows the ‘first to file’ rule: the first person or entity to file a patent application is presumed to be the rightful owner.
6) Patents can be invalidated by the United States Patent and Trademark Office.

A clever example discussed by Peter Cicala was the patenting of a new car feature. If X company has received a patent for a car, and Y company invents a new feature for that car, Y can patent the new feature (as long as it meets the legal requirements introduced above). Once the patent for the new feature is conferred to Y company, Y may produce that one feature, but not the car patented by X company, unless X grants Y a license. The patent thus only gives Y company the power to prevent others from making that new feature.

Conferring Patents in the US and Internationally

First, there has to be an invention of some sort. Once there is an invention, a patent application is filed. Patents are drafted free-hand, unlike a tax return, where one has a specific form to fill in; for patents, one starts from scratch. Patent applications are usually long (some reach 500 pages), and there are many legal requirements on what to say and how to say it. Eventually, the application goes to the patent office, where a patent examiner will, as the name suggests, examine it and deliberate with the applicant over the course of 3-5 years, pointing out sections that need further editing, clarification, or justification. There is a lot of back and forth until the examiner agrees that the invention has satisfied the patent requirements. Then one pays the fees and the patent is awarded. Fun fact: in the US, patents are granted only on Tuesdays.

On a global basis, one files a single international patent application, and the designated patent offices around the world examine it locally. If an office grants a patent, that patent is valid only in its own jurisdiction. This is why submitting patents costs so much: one files, and pays legal fees, in each jurisdiction. For example, if a patent on a compound is filed only in Japan, a different entity can manufacture the compound freely in the US, but not in Japan. This is one reason why companies and universities are very careful when filing patents.

Intellectual Property in Industry

Pharmaceutical products start with a great idea, but for every product that reaches the market, about 10,000 fail. Companies therefore file many patents, even though many of those patents may have no commercial value in 5-6 years. It costs about $500K in filing and attorneys’ fees to take a single patent through to issuance, so the bill adds up quickly: 10,000 filings at $500K each comes to $5 billion! Out of those 10,000 patents, typically one will earn the company an estimated $5 billion a year in returns.

A student asked, “Is submitting a patent the same price for a university as it is for a company?” In essence, no! The patent office distinguishes between large and small entities. Small entities, defined by requirements provided by the patent office3, pay half the fees, but attorneys charge a fixed price, so in the end small entities save only a small percentage. Another audience member asked, “What is patentable in the pharma business?” If one patents a molecule, no one else can make or use that molecule itself; that is how companies patent drugs and their associated components. One can also patent dosing regimens, formulations, modes of administration, etc. The compound claim gives the most protection, because it is very hard to make a knock-off of a molecule.

Intellectual Property in Academia

A student raised the issue that a great deal of communication occurs in science, especially at conferences and symposia, and among colleagues and classmates. That openness seems to be a big risk in the context of protecting one's intellectual property, yet it is an unavoidable part of doing scientific research.

Dora, a patent analyst from PCI Ventures, then discussed these issues from an academic perspective. She said, “The question raised here is that when one works in an academic institution the work is knowledge based and disseminated to others.... How does one draw the line from all that to protect something valuable?” What most, if not all, academic and research institutions do is have their lawyers work very closely with faculty, so that any time they are about to publish a paper, go to a conference, attend grand rounds, or make any other such public appearance, the lawyers hustle to get an application submitted before the event.

In addition to these more public forums, problems can arise from talking with friends who are not directly associated with the work. An example of this pertains to OPDIVO®, a drug patented by Ono Pharmaceuticals and Kyoto University in the ’90s, which was later exclusively licensed to Bristol-Myers Squibb, who launched the drug. Recently, the Dana Farber Cancer Institute sued Ono Pharmaceuticals and Bristol-Myers Squibb because the principal investigator at Kyoto University had periodically consulted a colleague at Dana Farber, who would send data he thought was helpful and advise them. Dana Farber claims that this now-retired professor from its institution should be included as an inventor on the patent. Because an inventor of a patent is part-owner, Dana Farber is effectively claiming ownership of the patent and would receive compensation from the sales of products under it4,5.

Michael, a Penn Med professor who works closely with a team of lawyers from PCI because he regularly files patents, said that balancing confidentiality with science communication is a difficult task. He commented, “I think it comes down to how important one thinks the invention is, and a lot of the time the patent will not get developed if it will not bring any money to the owner (company/institution).” Moreover, there has to be a conversation with the university, because the university pays for the patent and so decides what to file; it also depends on the resources of the university. Regarding the work of graduate students and postdoctoral fellows, there are further considerations. Students and postdocs want and need to publish, go to conferences, and present their work in order to move forward with their careers; patents can thus be a rather limiting step for them.

From the industry perspective, Peter clarified that the rule at Celgene is that no one can talk about anything until the patent application is filed. Once the patent application is filed, employees are free to talk to whomever they wish without causing a situation like the one with Dana Farber and Bristol-Myers Squibb, since the patent application has been filed prior to any communication.

Thus, a clear difference between industry and academia is that in industry things are kept under wraps until a patent is filed, whereas in academia patents are filed early to make sure the institution does not lose the right to patent by making the information public. Because universities file very early, there is a lot to deal with afterwards. The costs of prosecution are high, and sometimes an application does not make it through the full process, because universities cannot afford to throw $500K at an application without being confident of a return on the investment. For some universities, the reason to file might be purely strategic.

Ownership vs. Inventorship

Another interesting topic discussed was that of ownership vs. inventorship. There is a notion that ownership follows inventorship. In most cases, people do not file patents on their own; they work for companies or universities. Usually, an employment contract will state that if an employee invents something while employed by that entity, ownership of a resulting patent is assigned to the employer. Thus, the person is the inventor but not the owner of the patent; the entity is the owner. For academic research, the Bayh-Dole Act was enacted to allow universities to own inventions that came from investigations funded by the federal government6. Dora explained that, “Government officials got together and agreed that they awarded so much money into research and good stuff came out of it, which the government would own but not file patents or do anything with it commercially."

A preliminary list of inventors is written when the patent is filed, but legally the inventors are the people that can point to a claim and say: "I thought of that one." Inventors have to swear under oath that they thought of a particular claim, and need to be able to present their notebooks with the data supporting a claim of inventorship. Inventors are undivided part-owners of the patent, which means that any inventor listed in the patent can license that patent in any way, without accounting for any of the other inventors. Additionally, there is a difference between the people that think about the claims and the people that actually execute the subject matter of the resulting claim. If a person is only executing experiments without contributing intellectually to the idea or procedure, then that person is not an inventor. For those in academic research, this often differs from how paper authorship is decided – usually performing an experiment is sufficient.

Summary

The discussion prompted the researchers in the room to be on the lookout for ideas of theirs that could result in patents, and to be careful when discussing data and results with people outside their own research laboratories. The discussion also exposed key differences between intellectual property lawyers working in-house for universities and industry, as opposed to law firms with departments working on intellectual property. Ultimately, students felt they gained a basic understanding of how intellectual property works, the rules for filing patents, and some intrinsic differences between academic and industry research.

References:

1) United States Patent and Trademark Office – (n.d.) Retrieved December 11, 2016 from https://www.uspto.gov/patents-getting-started/general-information-concerning-patents
2) BITLAW – (n.d.) Retrieved December 11, 2016 from http://www.bitlaw.com/patent/requirements.html
3) United States Patent and Trademark Office – (n.d.) Retrieved December 20, 2016 from https://www.uspto.gov/web/offices/pac/mpep/s2550.html
4) Bloomberg BNA – (2015, October 2) Retrieved December 11, 2016 from https://www.bna.com/dana-farber-says-n57982059025/
5) United States District Court (District Court of Massachusetts). http://www.dana-farber.org/uploadedFiles/Library/newsroom/news-releases/2015/dana-farber-inventorship-complaint.pdf
6) National Institutes of Health, Office of Extramural Research – (2013, July 1) Retrieved December 11, 2016 from https://grants.nih.gov/grants/bayh-dole.htm

Event Recap: Anonymous Peer Review & PubPeer

by Ian McLaughlin 

On the 24th of October, the Penn Science Policy Group met to discuss the implications of a new mechanism by which individuals can essentially take part in the peer review process.  The group discussion focused on a particular platform, PubPeer.com, which emerged in 2012 and has since become a topic of interest and controversy in the scientific community.  In essence, PubPeer is an online forum for post-publication commentary, which ranges from small concerns raised by motivated readers to deeper dives into the legitimacy of a publication's figures, data, and statistics.  Given the current state of the widely criticized peer-review process, we considered the advantages and disadvantages of democratizing the process, with the added layer of anonymity applied to reviewers.

PubPeer has been involved in fostering investigations of several scandals in science.  A notable example is the critical evaluation of two papers published in Nature in 2014, one entitled Stimulus-triggered fate conversion of somatic cells into pluripotency [1].  The papers described a novel mechanism by which pluripotency might be induced by manipulating the pH environment of somatic cells.  Following publication, however, concerns regarding the scientific integrity of the published experiments were raised, resulting in the retraction of both papers and an institutional investigation.
  
Subsequently, the publications of a prolific cancer researcher received attention on PubPeer, ultimately resulting in the rescission of a prestigious position at a new institution eleven days before the start date, due at least in part to PubPeer commenters contacting faculty at that institution.  When the professor then tried to return to the former position, it was no longer available.  The professor sued the PubPeer commenters, arguing that the site must identify those whose comments had derailed a continued career in science.  PubPeer, advised by lawyers from the ACLU working pro bono, is refusing to comply – and enjoys the support of both Google and Twitter, which have filed a court brief in defense of the website [2].
                  
At its best, PubPeer fulfills an unmet, or poorly met, need in the science publication process.  Our discussion group felt that the goal of PubPeer is one the peer review process is meant to pursue but occasionally falls short of accomplishing. While increased vigilance is welcome, and bad science – or intentionally misleading figures – should certainly not be published, perhaps the popularity and activity on PubPeer reveal a correctable problem in the review process rather than a fundamental flaw. While the discussion group didn’t focus specifically on problems with the current peer review process – a topic deserving its own discussion [3] – the group felt there were opportunities to improve the process, and was ambivalent about whether a platform like PubPeer is sufficiently moderated, vetted, and transparent in the right ways to be an optimal means to this end.
                  
Some ideas proposed by discussion participants were to make the peer-review process more transparent, with increased visibility into the reasons a manuscript is or is not published.  Additionally, peer review often relies upon the input of just a handful of volunteer experts, all of whom are frequently under time constraints that can jeopardize their ability to thoroughly evaluate manuscripts – occasionally resulting in the assignment of peer review to members of related, though not optimally relevant, fields [4].  Some discussion participants suggested that a democratized review process, similar to PubPeer's, might alleviate some of these problems, provided commenters are moderated to ensure they have relevant expertise.  Alternatively, other participants argued that, given the gate-keeper role journals play in determining the career trajectories of aspiring scientists, the onus is on journals' editorial staffs to render peer review more effective.  Finally, another concept discussed was to layer a third-party moderation mechanism on top of a platform like PubPeer, ensuring comments are objective, constructive, and unbiased.
                  
The concept of a more open peer review is one that many scientists are beginning to seriously consider.  In Nature News, Ewen Callaway reported that 60% of authors publishing in Nature Communications agreed to have their papers' reviews published [7].  However, while a majority of respondents to a survey funded by the European Commission believed that open peer review ought to become more routine, not all strategies of open peer review received equivalent support.

Ultimately, the group unanimously felt that the popularity of PubPeer ought to be a signal to the scientific community that something is wrong with the publication process – something with potentially destructive ramifications that requires our attention [5].  Every time a significantly flawed article is published, damage is done to the perception of science and the scientific community.  At a time when the scientific community still enjoys broadly positive public perception [6], now is likely an opportune moment to reconsider the peer-review process – and perhaps learn some lessons that an anonymous post-publication website like PubPeer might teach us.

References


1) PubPeer - Stimulus-triggered fate conversion of somatic cells into pluripotency. (n.d.). Retrieved November 25, 2016, from https://pubpeer.com/publications/8B755710BADFE6FB0A848A44B70F7D 

2) Brief of Amici Curiae Google Inc. and Twitter Inc. in Support of PubPeer, LLC. (Michigan Court of Appeals). https://pubpeer.com/Google_Twitter_Brief.pdf

3) Balietti, S. (2016). Science Is Suffering Because of Peer Review’s Big Problems. Retrieved November 25, 2016, from https://newrepublic.com/article/135921/science-suffering-peer-reviews-big-problems

4) Arns M. Open access is tiring out peer reviewers. Nature. 2014 Nov 27;515(7528):467. doi: 10.1038/515467a. PubMed PMID: 25428463.

5) Jha, Alok. (2012). False positives: Fraud and misconduct are threatening scientific research. Retrieved November 25, 2016, from https://www.theguardian.com/science/2012/sep/13/scientific-research-fraud-bad-practice

6) Hayden, E. C. (2015, January 29). Survey finds US public still supports science. Retrieved November 25, 2016, from http://www.nature.com/news/survey-finds-us-public-still-supports-science-1.16818 

7) Callaway E. Open peer review finds more takers. Nature. 2016 Nov 10;539(7629):343. doi: 10.1038/nature.2016.20969. PubMed PMID: 27853233

Tracing the ancestry and migration of HIV/AIDS in America

by Arpita Myles
Acquired immunodeficiency syndrome, or AIDS, is a global health problem that has terrified and intrigued scientists and laypeople alike for decades. AIDS is caused by the Human Immunodeficiency Virus, or HIV, which is transmitted through blood, semen, vaginal fluid, and from an infected mother to her child [1]. Infection leads to failure of the immune system, increasing susceptibility to secondary infections and cancer, which are mostly fatal. Considerable effort is being put into developing prophylactic and therapeutic approaches to tackle HIV/AIDS, but there is also interest in understanding how the disease became so widespread. With the advent of the Ebola and Zika viruses in the last couple of years, there is a renewed urgency in understanding the emergence and spread of viruses in the past in order to prevent outbreaks in the future. The narrative surrounding the spread of HIV has been somewhat convoluted, but a new paper in Nature by Worobey et al. hopes to set the record straight [2].
Humans are thought to have acquired HIV from African chimpanzees, presumably as a result of hunters coming into contact with infected blood carrying a variant of the virus that had adapted to infect humans. The earliest known case of HIV in humans was detected in 1959 in Kinshasa, Democratic Republic of the Congo, but the specific mode of transmission was never ascertained [3].
There has been little or no information about how HIV spread to the United States, until now. HIV cases were first reported in the US in 1981, leading to the recognition of AIDS [4]. Since the virus can persist for a decade or more before symptoms appear, it may have arrived in the country long before 1981. However, because most samples from AIDS patients were collected after that date, efforts to establish a timeline for HIV’s entry into the States have met with little success. Now, researchers have traced the spread of HIV by comparing genetic sequences of contemporary HIV strains with those from blood samples of HIV-positive patients dating back to the late 1970s [2]. These samples were initially collected for a study pertaining to hepatitis B, but some were found to be HIV-seropositive. This is the first comprehensive genetic study of the HIV virus in samples collected prior to 1981.
The technical accomplishment of this work is significant as well. To circumvent the low abundance and extensive degradation of the viral RNA in the patient samples, the researchers developed a technique they call “RNA jackhammering”: many short, overlapping stretches of viral RNA are amplified from each sample. This enables them to “piece together” the viral genome, which they can then subject to phylogenetic analysis.
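To make the “piecing together” step concrete, here is a minimal sketch of overlap-based assembly in Python. This is not the authors' actual pipeline – real assemblers must tolerate sequencing errors, coverage gaps, and ambiguous repeats – and the RNA fragments below are invented for illustration:
```python
# Toy illustration of reassembling a sequence from short, overlapping fragments.

def overlap(a, b):
    """Length of the longest suffix of `a` that is also a prefix of `b`."""
    for size in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:size]):
            return size
    return 0

def assemble(fragments):
    """Greedily merge the pair with the largest overlap until one sequence is left."""
    frags = list(fragments)
    while len(frags) > 1:
        best = (0, 0, 1)  # (overlap length, index i, index j)
        for i, a in enumerate(frags):
            for j, b in enumerate(frags):
                if i == j:
                    continue
                o = overlap(a, b)
                if o > best[0]:
                    best = (o, i, j)
        o, i, j = best
        if o == 0:               # nothing overlaps any more; give up and join
            return "".join(frags)
        merged = frags[i] + frags[j][o:]
        frags = [f for k, f in enumerate(frags) if k not in (i, j)] + [merged]
    return frags[0]

# Hypothetical short reads recovered from a degraded sample, out of order:
pieces = ["ACGGAUUC", "GGUAGCUA", "AUUCCGAU", "AGCUACGG"]
print(assemble(pieces))  # -> GGUAGCUACGGAUUCCGAU
```
Scaled up and made error-tolerant, the same idea lets many short amplified pieces stand in for a genome too degraded to read in one pass.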
Using novel statistical analysis methods, Worobey et al. reveal that the virus probably entered New York from the Caribbean (Haiti) during the 1970s, whereupon it spread to San Francisco and other regions. Upon analyzing the older samples, the researchers found that, despite bearing similarities to the Caribbean strain, the strains from the San Francisco and New York samples differed from one another. This suggests that the virus entered the US at multiple, discrete times and then began circulating and mutating. Questions still remain regarding the route of transmission of the virus from Haiti to New York.
The relevance of this study is manifold. Based on the data, one can attempt to understand how pathogens spread from one population to another and how viruses mutate and evolve to escape natural immunity and engineered therapeutics. Their molecular and analytical techniques can be applied to other diseases and provide valuable information for clinicians and epidemiologists alike. Perhaps the most startling revelation of this study is that contemporary HIV strains are more closely related to their ancestors than to each other. This implies that information derived from ancestral strains could lead to development of successful vaccine strategies.
Beyond the clinic and research labs, there are societal lessons to be learned as well. Published in 1984, a study by Centers for Disease Control and Prevention (CDC) researcher William Darrow and colleagues traced the initial spread of HIV in the US to Gaétan Dugas, a French-Canadian air steward. Examination of Dugas’s case provided evidence linking HIV transmission with sexual activity. Researchers labeled Dugas “Patient O,” as in “Out of California” [5]. The media misinterpreted this as “Patient Zero” – a term still used in the context of other epidemics like flu and Ebola. The dark side of this story is that Dugas was demonized in the public domain as the one who brought HIV to the US. As our understanding of the disease and its spread broadened, scientists and historians began to discredit the notion that Dugas played a significant role, but the scientific facts were buried beneath layers of sensationalism and hearsay, and the stigma remained.
Now, with the new information brought to light by Worobey’s group, Dugas’s name has been cleared: phylogenetic analysis showed that Dugas’s strain of HIV differed sufficiently from the ancestral strains to rule out the possibility that he initiated the epidemic.
The saga in its entirety highlights the moral dilemma of epidemiological studies and the question of how far their findings should be made public. Biological systems are complicated, and while narrowing down the origin of a disease has significant clinical relevance, we often fail to consider collateral damage. The tale of tracking the spread of HIV is a cautionary one; scientific and social efforts should focus on resolution and management, rather than on vilifying unsuspecting individuals for “causing” an outbreak.

References:
1. Maartens G, Celum C, Lewin SR. HIV infection: epidemiology, pathogenesis, treatment, and prevention. Lancet. 2014 Jul 19;384(9939):258-71.
2. Worobey M, Watts TD, McKay RA et al., 1970s and 'Patient 0' HIV-1 genomes illuminate early HIV/AIDS history in North America. Nature. 2016 Oct 26. doi: 10.1038/nature19827.
3. Faria NR, Rambaut A et al., HIV epidemiology. The early spread and epidemic ignition of HIV-1 in human populations. Science. 2014 Oct 3;346(6205):56-61.
4. Centers for Disease Control (CDC). Pneumocystis pneumonia--Los Angeles. MMWR Morb Mortal Wkly Rep. 1981 Jun 5;30(21):250-2.
5. McKay RA. “Patient Zero”: The Absence of a Patient’s View of the Early North American AIDS Epidemic. Bull Hist Med. 2014 Spring: 161-194.

Event Recap: The Importance of Science-Informed Policy & Law Making

by Ian McLaughlin          

Last week, we held a panel discussion focused on the importance of science-informed policy & law making.  The panel included Dr. Michael Mann, a climatologist and geophysicist at Pennsylvania State University who recently wrote The Madhouse Effect: How Climate Change Denial is Threatening Our Planet, Destroying Our Politics, and Driving Us Crazy.   Dr. Andrew Zwicker, a member of the New Jersey General Assembly and a physicist who heads the Science Education Department of the Princeton Plasma Physics Laboratory, joined him.  Finally, Shaughnessy Naughton, a chemist and entrepreneur who ran for congressional office in Pennsylvania and founded the 314 PAC, which promotes the election of candidates with backgrounds in STEM fields to public office, joined the panel as well.

The event began with personal introductions, with each member characterizing their unique perspectives and personal histories.  Shaughnessy Naughton highlighted the scarcity of legislators with backgrounds in math and science as a primary motivator for encouraging people with science backgrounds to get involved beyond just advocacy. 

Dr. Andrew Zwicker, having previously run for office in the US House of Representatives, was ultimately successful in his run for the state assembly in an extremely tight race, winning by just 78 votes, or 0.2456% – a level of precision, he’s been told, that only a scientist would quote, as most people would round to a quarter of a percent.  He credited two primary features of his campaign for his success.  First, on a practical level, he utilized a more sophisticated voter model.  As the first Democrat elected in the district's 42-year history[1], he had to allocate resources optimally to communicate his message effectively.  Second, he identified his background in science as a strength: when campaigning, he made it clear that facts would guide his decisions – and his constituents found that pragmatism appealing.

Next, Dr. Michael Mann summarized his pathway to prominence in the climate change debate by recounting the political fallout that occurred following the publication of his now famous “hockey-stick graph”[2].  In short, the graph depicts that average global temperatures had been fairly stable until 1900 (forming the shaft of the hockey stick), at which point a sharp rise in temperature begins (forming the blade).  In articulating why exactly this publication made such a splash, he highlighted the simplicity of the graph. It summarizes what is otherwise fairly esoteric data in a way that’s accessible to non-scientists.  “You don’t have to understand the complex physics to understand what the graph was saying: there’s something unprecedented taking place today, and, by implication, probably has something to do with what we’re doing.”  After its publication, he was in for a whirlwind.  The graph became iconic in the climate change debate, provoking the ire of special interests who then pursued a strategy to personally discredit Mann.

Naughton initiated the conversation by asking Zwicker whether his background in science had influenced what he has been able to accomplish in his first 9 months in public office.  While at times it has given him credibility and garnered trust among his peers and constituents, the nature of science is often incongruous with politics: rather than relying solely on facts, politics requires emotional and personal appeals to get things done.  A specific example: the fear of jobs being lost to legislation, particularly reforms focused on energy and climate change, oftentimes obscures what would otherwise be a less volatile debate.

Naughton then asked Mann to describe his experience with Ken Cuccinelli, the former Attorney General (AG) of Virginia under former governor Bob McDonnell.  One of the former AG’s priorities was to target the Environmental Protection Agency’s ability to regulate greenhouse gas emissions, as well as to demand that the University of Virginia – the institution where Dr. Mann had been an assistant professor from 1999 to 2005 – provide a sweeping compilation of documents associated with him.  Cuccinelli relied upon the 2002 Virginia Fraud Against Taxpayers Act, devised to enable the AG to ferret out state waste and fraud, to serve the civil investigative demand.  Ultimately, Cuccinelli’s case was rejected, a result that has since been considered a major victory for the integrity of academic research and scientists’ privacy.

The panel then invited questions from attendees, which ranged from technical inquiries of how climate estimates were made for the Hockey Stick Curve to perspectives on policy & science communication. 

One question focused on the public’s ability to digest and think critically about scientific knowledge – highlighting that organizations and institutions like AAAS and the NSF regularly require funded investigators to spend time communicating their research to a broader audience.  However, the relationship between the public and science remains tenuous.  Zwicker responded by identifying a critical difference in efficacy between the beautiful images and data from NASA or press releases and the personal experiences of people outside of science.  Special interest groups can disseminate opinions and perspectives that don’t comport with the scientific consensus, and without truly effective science communication, the public simply can’t know whom to trust.  He argued that scientists do remain a broadly trusted group, but that without competent efforts to communicate the best science, knowing whom to trust remains a major challenge.  Ultimately, the solution involves a focus on early education and teaching critical thinking skills.

Moreover, Mann commented on a problematic fallacy that arises from a misunderstanding of how science works: “there’s a fallacy that because we don’t know something, we know nothing.  And that’s obviously incorrect.” There are many issues at the forefront of science that remain to be understood, but that forefront exists because of relevant established knowledge.  “We know greenhouse gasses warm the planet, and it’ll warm more if we continue burning carbon.  There’s still uncertainty with gravity.  We haven’t reconciled quantum mechanics with general relativity.  Just because we haven’t reconciled all of the forces, and there’s still something to be learned about gravity at certain scales – we still understand that if we jump out the window, we’ll plummet to our deaths.”

Naughton suggested that much of this disconnect between scientific knowledge and public sentiment comes down to communication.  “For many scientists, it’s very difficult to communicate very complex processes and theories in a language that people can understand.  As scientists, you want to be truthful and honest.  You don’t learn everything about quantum mechanics in your first year of physics; by not explaining everything, that doesn’t mean you’re being dishonest.” 

Zwicker highlighted that there aren’t many prominent science communicators, asking the audience to name as many as they could.  Then, he asked if we could name prominent female science communicators, which proved more difficult for the audience.  There isn’t necessarily a simple solution to this obvious problem, given the influence of special interests and concerns of profitability.

An audience member then asked whether the panelists considered nuclear energy a viable alternative – and, in particular “warehouse-ready nuclear”, which describes small modular reactors that operate on a much smaller scale than the massive reactors to which we’ve become accustomed.  Zwicker, as a physicist, expressed skepticism: “You’ll notice there are no small reactors anywhere in the world.  By the time you build a reactor and get through the regulation – and we’re talking 10-30 years to be completed – we’re still far away from them being economically viable.”  He also noted that he’s encountered the argument that investment allocation matters to the success of a given technology, and that investment in one sustainable energy platform may delay progress in others.  The audience then asked about the panel’s perspectives on natural gas, which is characterized by some as a bridge fuel to a lower carbon-emitting future energy source.  Summarizing his perspective on natural gas, Mann argued “a fossil fuel ultimately can’t be the solution to a problem caused by fossil fuels.”

Jamie DeNizio, a member of PSPG, asked if the panel thought coalitions between state and local governments could be an effective strategy to get around current barriers at the national level.  Naughton noted that this is ultimately the goal behind the federal Clean Power Plan, with goals tailored to specific states for cutting carbon output.  Mann, highlighting the prevalent lack of acceptance of climate change at the federal level, suggested that the examples of state consortia that currently exist – like The Regional Greenhouse Gas Initiative (RGGI) in New England, or the Pacific Coast Collaborative (PCC) on the West Coast – are causes for optimism, indicating that progress can be made despite gridlock at the federal level.  Zwicker noted that New Jersey’s participation in trading carbon credits had resulted in substantial revenue, as New Jersey was able to bring in funds to build a new hospital.  He suggested that Governor Chris Christie’s decision to withdraw from RGGI was imprudent, and the New York Times noted that, in 2011, New Jersey had received over $100 million in revenue from RGGI[3].

Another issue that was brought up by the panel was how counterproductive infighting among environmentalists and climate change activists can be to the overall effort.  In particular, this splintering enables critics to portray climate change as broadly incoherent, rendering the data and proposals less convincing to skeptics of anthropogenic climate change.

Adrian Rivera, also a PSPG member, asked the panel to comment on whether they felt social media is an effective strategy to communicate science to the general public.  Mann stated that scientists who do not engage on social media are not being as effective as they could be, mostly because a growing subset of the population derives its information via social media platforms. In contrast, Zwicker highlighted the lack of depth on social media, noting that some issues simply require more in-depth discussion than social media tends to accommodate; he also emphasized the value of face-to-face communication. Naughton then brought this point to a specific example of poor science communication translating into tangible problems.  “It’s not all about policy or NIH/NSF funding.  It’s about making sure evolution is being taught in public schools.”  She noted the experience of a botany professor in Susquehanna, PA, who was holding an info-session on biology for high-school teachers. One of the attending teachers told him that he was brave for teaching evolution in school – a remark Naughton cited as evidence of ineffective science communication.

Finally, an environmental activist in the audience noted that a major problem he’d observed in his own approach to advocacy was that he was often speaking through feelings of anger rather than positive terms.  Mann thoroughly agreed, and noted that “there’s a danger when we approach from doom and gloom.  This takes us to the wrong place; it becomes an excuse for inaction, and it actually has been co-opted by the forces of denial.  It is important to communicate that there is urgency in confronting this problem [climate change] – but that we can do it, and have a more prosperous planet for our children and grandchildren.  It’s critical to communicate that.  If you don’t provide a path forward, you’re leading people in the wrong direction.”

The event was co-hosted by 314 Action, a non-profit affiliated with 314 PAC with the goal of strengthening communication among the STEM community, the public, and elected officials.


References:

1. Qian, K. (2015, November 11). Zwicker elected as first Democrat in NJ 16th district. Retrieved October 6, 2016, from http://dailyprincetonian.com/news/2015/11/zwicker-elected-as-first-democrat-in-nj-16th-district/

2. Mann, Michael E.; Bradley, Raymond S.; Hughes, Malcolm K. (1999), "Northern hemisphere temperatures during the past millennium: Inferences, uncertainties, and limitations" (PDF), Geophysical Research Letters, 26 (6): 759–762, Bibcode:1999GeoRL..26..759M, doi:10.1029/1999GL900070

3. Navarro, M. (2011, May 26). Christie Pulls New Jersey From 10-State Climate Initiative. Retrieved October 6, 2016, from http://www.nytimes.com/2011/05/27/nyregion/christie-pulls-nj-from-greenhouse-gas-coalition.html?_r=1&ref=nyregion

New Research Shows How to Make Haploid Human Stem Cell Lines

by Amaris Castanon
For the first time, scientists have generated haploid embryonic stem (ES) cell lines in humans, as published in Nature. The advance could lead to novel cell therapies for genetic diseases – even color blindness (Benvenisty et al., 2016).
The study was performed by scientists from the Hebrew University of Jerusalem (Israel) in collaboration with Columbia University Medical Center (CUMC) and the New York Stem Cell Foundation (NYSCF).
The newly derived pluripotent human ES cell lines demonstrated the ability to ‘self-renew’ while maintaining a normal haploid karyotype – that is, an intact single set of chromosomes – generation after generation (Benvenisty et al., 2016).
While gamete manipulation in other mammalian species has yielded several haploid ES cell lines (Yang, H. et al.; Leeb, M. & Wutz, A.), this is the first study to report human cells capable of dividing with only a single copy of the genome (Benvenisty et al., 2016).
The genetic match between the stem cells and the egg donor may prove advantageous for cell-based therapies of genetic diseases such as diabetes, Tay-Sachs disease and even color blindness (Elling et al., 2011).
Mammalian cells are diploid because they inherit two sets of chromosomes – in humans, 23 from the father and 23 from the mother, for a total of 46 (Wutz, 2014; Yang, H. et al., 2012). Haploid cells contain a single set of 23 chromosomes and normally arise only as post-meiotic germ cells (egg and sperm), ensuring that the right number of chromosomes ends up in the zygote (embryo) (Li et al., 2014; Elling et al., 2011).
Previous efforts to generate ES cells from human egg cells yielded only diploid (46-chromosome) stem cells (Leeb, M. et al., 2012; Takahashi, S. et al., 2014). This study, however, succeeded by inducing cell division in unfertilized human egg cells (Benvenisty et al., 2016).
The researchers labeled DNA with a fluorescent dye in order to isolate the rare haploid stem cells scattered among the larger pool of diploid cells. DNA staining demonstrated that the haploid cells retained their single set of chromosomes, while their differentiation into other cell types – including nerve, heart, and pancreatic cells – demonstrated their ability to give rise to cells of different lineages (pluripotency) (Benvenisty et al., 2016).
Indeed, the newly derived haploid ES cells demonstrated pluripotent stem cell characteristics, such as self-renewal capacity and a pluripotency-specific molecular signature (Benvenisty et al., 2016).
In addition, the researchers successfully demonstrated that their newly derived human ES cells can serve as a platform for loss-of-function genetic screening, highlighting the power of screens that need to disrupt only a single copy of each gene.
These findings may facilitate future genetic analysis by making gene editing easier in cancer research and regenerative medicine.
Haploid cells are significant here because detecting the biological effects of a single-copy mutation in a diploid cell is difficult: the second, unmutated copy serves as a ‘backup’ set of genes that masks the mutation’s effects.
The newly derived haploid ES cells will provide researchers with a valuable tool for improving our understanding of human development and genetic diseases.
This study has provided scientists with a new type of human stem cell that will play an important role in human functional genomics and regenerative medicine.
References:
Sagi, I. et al. Derivation and differentiation of haploid human embryonic stem cells. Nature 532, 107–111 (2016).

Elling, U. et al. Forward and reverse genetics through derivation of haploid mouse embryonic stem cells. Cell Stem Cell 9, 563–574 (2011).

Leeb, M. et al. Germline potential of parthenogenetic haploid mouse embryonic stem cells. Development 139, 3301–3305 (2012).

Leeb, M. & Wutz, A. Derivation of haploid embryonic stem cells from mouse embryos. Nature 479, 131–134 (2011).

Li, W. et al. Genetic modification and screening in rat using haploid embryonic stem cells. Cell Stem Cell 14, 404–414 (2014).

Takahashi, S. et al. Induction of the G2/M transition stabilizes haploid embryonic stem cells. Development 141, 3842–3847 (2014).

Wutz, A. Haploid mouse embryonic stem cells: rapid genetic screening and germline transmission. Annu. Rev. Cell Dev. Biol. 30, 705–722 (2014).

Yang, H. et al. Generation of genetically modified mice by oocyte injection of androgenetic haploid embryonic stem cells. Cell 149, 605–617 (2012).

Event Recap: Dr. Sarah Rhodes, Health Science Policy Analyst

by Chris Yarosh

PSPG tries to hold as many events as limited time and funding permit, but we cannot bring in enough speakers to cover the range of science policy careers out there. Luckily, other groups at Penn hold fantastic events, too, and this week’s Biomedical Postdoc Program Career Workshop was no exception. While all of the speakers provided great insights into their fields, this recap focuses on Dr. Sarah Rhodes, a Health Science Policy Analyst in the Office of Science Policy (OSP) at the National Institutes of Health (NIH).

First, some background: Sarah earned her Ph.D. in Neuroscience from Cardiff University in the U.K., and served as a postdoc there before moving across the pond and joining a lab at the NIH. To test the policy waters, Sarah took advantage of NIH’s intramural detail program, which allows scientists to do temporary stints in administrative offices. For her detail, Sarah worked as a Policy Analyst in the Office of Autism Research Coordination (OARC) at the National Institute of Mental Health (NIMH). That experience convinced her to pursue policy full time. Following some immigration-related delays, Sarah joined OARC as a contractor and later became a permanent NIH employee.

After outlining her career path, Sarah provided an overview of how science policy works in the U.S. federal government, breaking the field broadly into three categories: policy for science, science for policy, and science diplomacy. According to Sarah (and as originally promulgated by Dr. Diane Hannemann, another of this event’s panelists), the focus of different agencies roughly breaks down as follows:
  • Funding agencies (e.g., NIH, NSF): mostly policy for science
  • Congress: mostly science for policy
  • Regulatory agencies: a mix – they both conduct research and regulate activities

This makes a lot of sense. Funding agencies like NIH and NSF are mostly concerned with how science is done, Congress is concerned with general policymaking, and the regulatory agencies both conduct research and regulate activities under their purview. Even so, Sarah did note that all these agencies do a bit of each type of policy (e.g. science diplomacy at the NIH’s Fogarty International Center). In addition, different components of each agency have different roles. For example, individual Institutes focus more on analyzing policy for their core mission (aging at NIA, cancer at NCI, etc.), while the OSP makes policies that influence all corners of the NIH.

Sarah then described her personal duties at OSP’s Office of Scientific Management and Reporting (OSMR):
  • Coordinating NIH’s response to a directive from the President’s Office of Science and Technology Policy related to scientific collections (think preserved specimens and the like)
  • Managing the placement of AAAS S&T Fellows at NIH
  • Supporting the Scientific Management Review Board, which advises the NIH Director
  • Preparing for NIH’s appropriations hearings and responding to Congressional follow-ups
  • “Whatever fires need to be put out”
If this sounds like the kind of job for you, Sarah recommends building a professional network and developing your communication skills ASAP (perhaps by blogging!?). This sentiment was shared by all of the panelists, and it echoes advice from our previous speakers. Sarah also strongly recommends volunteering for university or professional society committees; these bodies work as deliberative teams and are therefore good preparation for the deliberative style of government work.

For more information, check out the OSP’s website and blog. If you’re interested in any of the other speakers from this panel, I refer you to the Biomedical Postdoc Program.

Event Recap: Dr. Sarah Martin, ASBMB Science Policy Fellow

by Ian McLaughlin

On February 11th, Dr. Sarah Martin, a Science Policy Fellow at the American Society for Biochemistry and Molecular Biology (ASBMB), visited Penn to chat about her experience working in science policy. As it turns out, her story is perhaps more circuitous than one might expect.

An avid equestrian, Sarah earned a bachelor’s degree in animal sciences and a master’s degree in animal nutrition at the University of Kentucky before embarking on a Ph.D. in Molecular and Cellular Biochemistry at UK’s College of Medicine. While pursuing her degrees, Sarah realized that the tenure track was not for her, and she began exploring career options using the Individual Development Plan (IDP) provided by AAAS Careers. At the top of the list: science policy.

With an exciting career option in mind, Sarah sought ways to build “translatable skills” during her Ph.D. to help her move toward science policy. She served as treasurer, and later Vice President, of UK’s Graduate Student Congress and developed her communication skills by starting her own blog and participating in Three Minute Thesis.  Sarah stressed the importance of communicating with non-scientists, and she highlighted how her practice paid off during Kentucky’s first-ever State Capitol Hill Day, an event that showcases Kentucky-focused scientific research to the state’s legislators.

Sarah also shared how she got up to speed on science policy issues, becoming a “student of policy” by voraciously reading The Hill, Roll Call, Politico, ScienceInsider, and ASBMB’s own PolicyBlotter. Additionally, she started to engage with peers, non-scientists, and legislators on Twitter, noting that it’s a useful tool for sampling common opinions on issues related to science.  Finally, she reached out to former ASBMB fellows for advice on how to pursue a career in science policy – and they were happy to help.

Sarah then described the typical responsibilities of an ASBMB fellow, breaking them down into four categories:
  1. Research: tracking new legislation, plus a daily diet of articles on new developments in science and policy
  2. Meetings: with legislators on Capitol Hill, staff at the NIH, partner organizations such as the Federation of American Societies for Experimental Biology (FASEB), and others
  3. Writing: white papers, position statements, and blog posts on everything from ASBMB’s position on gene editing to the NIH Strategic Plan for FY 2016-2020
  4. Administration: organizing and preparing for meetings, composing executive summaries, and helping to plan and organize ASBMB’s Hill Day

Sarah also talked about her own independent project at ASBMB, a core component of each Fellowship experience. Sarah aims to update ASBMB’s Advocacy Toolkit in order to consolidate all of the resources a scientist might need to engage in successful science advocacy.

Comparing the ASBMB fellowship to similar programs, she noted one advantage: the fellowship has no fixed end date, which gives Fellows plenty of time to find permanent positions that match their interests.  Sarah also noted that, compared to graduate students and postdocs, she enjoys an excellent work/life balance.

Ultimately, Sarah made it clear that she loves what she does. She closed by sharing resources from ASBMB Science Policy Analyst Chris Pickett for anyone interested in applying for the ASBMB fellowship or pursuing a career in science policy.

WHO says bacon causes cancer?

by Neha Pancholi

Note: Here at the PSPG blog, we like to feature writing from anyone in the Penn community interested in the science policy process or science for general interest. This is the first in a series of posts from new authors. Interested in writing for the blog? Contact us!

The daily meat consumption in the United States exceeds that of almost every other country1. While the majority of meat consumed in the United States is red meat2, the consumption of certain red meats has decreased over the past few decades due to associated health concerns, such as heart disease and diabetes1,2. In October, the World Health Organization (WHO) highlighted another potential health concern for red meat: cancer.

The announcement concerned both red and processed meat. Red meat is defined as unprocessed muscle meat from mammals, such as beef and pork3. Processed meat – generally red meat – has been altered to improve flavor through processes such as curing or smoking3. Examples of processed meat include bacon and sausage. The WHO confirmed that processed meat causes cancer and that red meat probably causes cancer. Given the prevalence of meat in the American diet, it was not surprising that the announcement dominated headlines and social media. So how exactly did the WHO decide that processed meat causes cancer?

The announcement by the WHO followed a report from the International Agency for Research on Cancer (IARC), which is responsible for identifying and assessing suspected causes of cancer. The IARC evaluates the typical level of exposure to a suspected agent, results from existing studies, and the mechanism by which the agent could cause cancer.

After a review of existing literature, the IARC classifies the strength of scientific evidence linking the suspected cancer-causing agent to cancer. Importantly, the IARC determines only whether there is sufficient evidence that something can cause cancer. The IARC does not evaluate risk, meaning that it does not evaluate how carcinogenic something is. The IARC classifies the suspected carcinogen into one of the following categories4:
  • Group 1 – There is convincing evidence linking the agent to cancer in humans. The agent is deemed carcinogenic.
  • Group 2A – There is sufficient evidence of cancer in animal models, and there is a positive association observed in humans. However, the evidence in humans does not exclude the possibility of bias, chance, or confounding variables. The agent is deemed as a probable carcinogen.
  • Group 2B – There is a positive association in humans, but the possibility of bias, chance, or confounding variables cannot be excluded, and there is inadequate evidence in animal models. This category is also used when there is sufficient evidence of cancer in animal models but no association observed in humans. The agent is deemed a possible carcinogen.
  • Group 3 – There is inadequate evidence in humans and animals. The agent cannot be classified as carcinogenic or not carcinogenic.
  • Group 4 – There is evidence suggesting that the agent is not carcinogenic in humans or in animals. The agent is deemed probably not carcinogenic.
The IARC reviewed over 800 studies that examined the correlation between consumption of processed or red meat and cancer occurrence in humans. These types of studies, which examine patterns of disease in different populations, are called epidemiological studies. The studies included observations from all over the world, spanning diverse ethnicities and diets. The greatest weight was given to studies that followed the same group of people over time and had an appropriate control group. Most of the available data examined the association between meat consumption and colorectal cancer, but some studies also assessed the effect on stomach, pancreatic, and prostate cancer. The majority of studies showed a higher occurrence of colorectal cancer in people whose diets included high consumption of red or processed meat compared to those with low consumption. By comparing results from several studies, the IARC determined that for every 100 grams of red meat consumed per day, the occurrence of colorectal cancer rises by 17%; for every 50 grams of processed meat eaten per day, it rises by 18%. For those who eat red meat, average consumption is 50-100 grams per day.3
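
It is worth emphasizing that those percentages are relative increases, not absolute risks. A quick back-of-the-envelope calculation makes the distinction concrete; note that the ~4% baseline lifetime risk of colorectal cancer used below is an assumed round figure chosen for illustration, not a number from the IARC report:

```python
# Relative vs. absolute risk: a rough illustration (assumed baseline, not IARC data).
baseline_risk = 0.04        # assumed lifetime colorectal cancer risk (~4%, illustrative)
processed_increase = 0.18   # +18% relative increase per 50 g processed meat/day (IARC)
red_increase = 0.17         # +17% relative increase per 100 g red meat/day (IARC)

for label, rel in [("processed meat, 50 g/day", processed_increase),
                   ("red meat, 100 g/day", red_increase)]:
    new_risk = baseline_risk * (1 + rel)  # scale the baseline by the relative increase
    print(f"{label}: {baseline_risk:.1%} -> {new_risk:.1%} "
          f"(+{(new_risk - baseline_risk):.2%} absolute)")
```

Under that assumption, an 18% relative increase moves the absolute lifetime risk from 4% to roughly 4.7% – a real but modest shift, a point the mortality comparison later in this post reinforces.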

The IARC also reviewed studies that examined how meat could cause cancer. They found strong evidence that consumption of red or processed meat leads to the formation of known carcinogens called N-nitroso compounds in the colon. It is also known that cooked meat contains two types of compounds that are known to damage DNA, which can lead to cancer. However, there is not a direct link between eating meat containing these compounds and DNA damage in the body.3

Based on the strong evidence demonstrating a positive association between consumption of processed meat and colorectal cancer, the IARC classified processed meat as a Group 1 agent3. This means that there is sufficient evidence that consumption of processed meat causes cancer.

There was a positive association between consumption of red meat and colorectal cancer in several epidemiological studies. However, the possibility of chance or bias could not be excluded from these studies. Furthermore, the best-designed epidemiological studies did not show any association between red meat consumption and cancer. Despite the limited epidemiological evidence, there was strong mechanistic evidence demonstrating that red meat consumption results in the production of known carcinogens in the colon. Therefore, red meat was classified as a probable carcinogen (Group 2A)3.

It will be interesting to see how the WHO announcement affects red meat consumption in the United States and worldwide. But before swearing off processed and red meat forever, there are a few things to consider.

First, it is important to bear in mind that agents classified within the same group have varying carcinogenic potential. Processed meat was classified as a Group 1 agent, the same classification as tobacco smoke. However, estimates by the Global Burden of Disease Project attribute approximately 34,000 cancer deaths per year to consumption of processed meat5. In contrast, one million cancer deaths per year are due to tobacco smoke5. While the evidence linking processed meat to cancer is strong, the risk of cancer due to processed meat consumption appears to be much lower than that posed by other known carcinogens. Second, the IARC did not evaluate studies that compared vegetarian or poultry diets to red meat consumption5. Therefore, it is unknown whether vegetarian or poultry diets are associated with fewer cases of cancer. Finally, red meat is high in protein, iron, zinc, and vitamin B12.3 Thus, while high red meat consumption is associated with some diseases, there are also several health benefits of consuming red meat in moderation. Ultimately, it will be important to balance the risks and benefits of processed and red meat consumption.


1http://www.npr.org/sections/thesalt/2012/06/27/155527365/visualizing-a-nation-of-meat-eaters
2http://www.usda.gov/factbook/chapter2.pdf
3Bouvard et al. Carcinogenicity of consumption of red and processed meat. The Lancet Oncology, 2015. 16(16): 1599-1600.
4http://www.iarc.fr/en/media-centre/iarcnews/pdf/Monographs-Q&A.pdf
5http://www.who.int/features/qa/cancer-red-meat/en/

Ready to Adapt: Experts Discuss Philadelphia Epidemic Preparedness

by Jamie DeNizio and Hannah Shoenhard


In early November, public health experts from a variety of organizations gathered on Penn’s campus to discuss Philadelphia’s communication strategies and preparation efforts in the event of an epidemic outbreak. In light of recent crises, such as H1N1 and Ebola in the US, AAAS Emerging Leaders in Science and Society (ELISS) fellows and the Penn Science Policy Group (PSPG) hosted local experts at both a public panel discussion and a focus group meeting to understand the systems currently in place and develop ideas about what more can be done.
Are we prepared?: Communication with the public
Dr. Max King, moderator of the public forum, set the tone for both events with a Benjamin Franklin quote: “By failing to prepare, you are preparing to fail.” Measures taken before a crisis begins can make or break the success of a public health response. In particular, in the age of the sensationalized, 24-hour news cycle, the only way for public health professionals to get the correct message to the public is to establish themselves as trustworthy sources of information in the community ahead of time.
For reaching the general population, the advent of social media has been game-changing. As an example, James Garrow, Director of Digital Public Health for the Philadelphia Department of Public Health, described Philadelphia’s use of its Facebook page to rapidly disseminate information during the H1N1 flu outbreak. The city was able to provide detailed information while interacting with and answering questions directly from members of the public in real time, a considerable advantage over traditional TV or print news.

However, Garrow was quick to note that “mass media still draws a ton of eyeballs,” and that any public health outreach program would be remiss to neglect traditional media such as TV, radio, and newspapers. At this point, social media is a complement to, but not a replacement for, other forms of media engagement.
Furthermore, those typically at greater risk during an epidemic are often unable to interact with social media channels due to economic disadvantage, age, or a language barrier. In Philadelphia, 21.5% of the population speaks a language other than English at home, and 12.5% of the population is over the age of 65 (U.S. Census Bureau). The focus group meeting specifically discussed how to reach these underserved groups. Suggestions included “block captains” – Philadelphia residents responsible for relaying important information to everyone on their block or in their neighborhood – and registries for keeping track of individuals who may need extra help. Alongside these person-to-person methods, there was general agreement on the need for translation-friendly, culturally relevant public health messages.

For example, during the open forum, Giang T. Nguyen, leader of the Penn Asian Health Initiative and Senior Fellow of the Penn Center for Public Health Initiatives, emphasized the importance of building ties with “ethnic media”: small publications or radio channels that primarily cater to immigrant communities in their own languages. He noted that, in the past, lack of direct contact between government public health organizations and non-English-speaking communities has led to the spread of misinformation in these communities.

On the other hand, Philadelphia has also successfully engaged immigrant communities in the recent past. For example, Garrow pointed to Philadelphia’s outreach in the Liberian immigrant community during the Ebola outbreak as a success story. When the outbreak began, the health department had already built strong ties with the Liberian community, to the point where the community actively asked the health department to hold a town hall meeting, rather than the reverse. This anecdote demonstrates the importance of establishing trust and building ties before a crisis emerges.
With regard to both general and community-targeted communication, the experts agreed that lack of funding is a major barrier to solving current problems. At the expert meeting, it was suggested that communication-specific grants, rather than larger grants with a certain percentage allotted for communication, might be one way of ameliorating this problem.
Are we prepared?: Communication between health organizations

The need for established communications networks extends beyond those for communicating directly with individuals. It is crucial for the local health department and healthcare system to have a strong relationship. Here in Philadelphia, the health department has a longstanding relationship with Penn Medicine, as well as other universities and major employers. In case of an emergency, these institutions are prepared to distribute vaccines or other medicines. Furthermore, mechanisms for distribution of vaccines already in place are “road-tested” every year during flu season. As an example, Penn vaccinated 2,500 students and faculty for the flu in eight hours during a recent vaccination drive, allowing personnel to sharpen their skills and identify any areas that need improvement.

In addition to the strong connections between major Philadelphia institutions, there is also a need for smaller health centers and community centers to be kept in the loop. These small providers serve as trusted intermediaries between large public health organizations and the public. According to the experts, these relationships are already in place. For example, during the recent Ebola crisis, the CDC set up a hotline for practitioners to call if one of their patients returned from an Ebola-stricken country with worrying symptoms. “You can’t expect everyone in the entire health system to know all they need to know [about treating a potential Ebola case],” said Nguyen, “but you can at least ensure that every practice manager and medical director knows the phone number to call.”

Can we adapt?
Ultimately, no crisis situation is fully predictable. Therefore, what matters most for responders is not merely having the proper protocols, resources, and avenues of communication in place, but also the ability to adjust their reaction to a crisis situation as it evolves. As Penn behavioral economics and health policy expert Mitesh Patel pointed out at the end of the open forum, “It’s not ‘are we ready?’, it’s ‘are we ready to adapt?’”
The topic of adaptability was also heavily discussed at the focus group meeting. A lack of a central communication source was identified as a potential barrier to adaptability. So was a slow response from agencies further up the chain of command, such as the CDC. However, experts also disagreed about the precise degree of control the CDC should have at a local level. For example, representatives from local government agencies, which are more directly accountable to the CDC, expressed a desire for the CDC to proactively implement strategies, instead of attempting to direct the local response once it has already begun. Many physicians and hospital representatives, on the other hand, were of the opinion that plans formulated by the people closest to the crisis may be superior due to their situational specificity and lack of red tape. Despite this point of contention, experts agreed that there is a need for some consensus and coordination between hospitals in a particular region on how to respond to a large-scale health event.

One gap in Philadelphia’s preparedness identified by the experts in the focus group is its capacity to manage cases of novel diseases – a challenge, since the transmission route of a novel disease is often unknown. Some experts in the meeting also expressed doubt that Philadelphia is prepared for a direct biological attack. However, numerous epidemic-response frameworks already in place could potentially be repurposed for novel or deliberately-spread pathogens. In these cases, even more so than in “typical” epidemic situations, the experts identified adaptability as a key factor for success.

At the end of the open forum, the panelists affirmed the belief that Philadelphia is as prepared as it can be for an infectious disease crisis.  Furthermore, it seemed they had also moved the opinions of the event’s attendees: before the forum, attendees rated Philadelphia’s readiness at an average of 3.1 on a 0-to-6 scale (with 0 being “not at all ready” and 6 being “completely ready”), while afterwards, the same attendees rated Philadelphia’s readiness at an average of 3.9 on the same scale (p=0.07, paired t-test).
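
For readers curious about the statistic: a paired t-test compares each attendee’s own before and after ratings rather than just the two group means, which is the appropriate choice when the same people are surveyed twice. A minimal sketch of such a test is below; the rating vectors are hypothetical stand-ins, since the forum’s raw survey responses were not published – only the 3.1 and 3.9 averages above come from the event.

```python
# Minimal sketch of a paired t-test on before/after ratings.
# Illustrative data only: these vectors are invented, not the forum's actual survey.
from scipy import stats

before = [3, 4, 2, 3, 4, 3, 2, 4]  # hypothetical pre-forum ratings, 0-6 scale
after  = [4, 4, 3, 4, 5, 4, 3, 4]  # hypothetical post-forum ratings, same attendees

t_stat, p_value = stats.ttest_rel(after, before)  # pairs each attendee with themselves
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```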

Reminder: Science does not happen in a vacuum

by Chris Yarosh

It is very easy to become wrapped up in day-to-day scientific life. There is always another experiment to do, or a paper to read, or a grant to submit. This result leads to that hypothesis, and that hypothesis needs to be tested, revised, re-tested, etc. Scientists study the inner workings of life, matter, and the universe itself, yet science often seems set apart from other worldly concerns.

But it’s not.

The terrorist attacks in Paris and Beirut and the ongoing Syrian refugee crisis have drawn the world’s attention, and rightfully so. These are genuine catastrophes, and it is difficult to imagine the suffering of those who must face the aftermath of these bouts of shocking violence.

At the same time, 80 world leaders are preparing to gather in freshly scarred Paris for another round of global climate talks. In a perfect world, these talks would focus only on the sound science and overwhelming consensus supporting action on climate change, and they would lead to an agreement that sets us on a path toward healing our shared home.

But this is not a perfect world.

In addition to the ongoing political struggle and general inertia surrounding climate change, we now must throw the fallout from the Paris attacks into the mix. Because of this, the event schedule will be limited to core discussions, which will deprive some people of their chance to demonstrate and make their voices heard on a large stage. This is a shame, but at least the meeting will go on. If the situation is as dire as many scientists and policy experts say it is, this meeting may be our last chance to align the world’s priorities and roll back the damage being caused to our planet. It was never going to be easy, and the fearful specter of terrorism—and the attention and resources devoted to the fight against it— does nothing to improve the situation.

This is a direct example of world events driving science and science policy, but possible indirect effects abound as well. It is not outside the realm of possibility that political disagreement over refugee relocation could lead to budget fights or a government shutdown, either of which could seriously derail research in the U.S. With the 2016 election rapidly approaching, it is also possible that events abroad will shape voter preferences at home, with unforeseen impacts on how research is funded, conducted, and disseminated.

What does this mean for science and science policy?

For one, events like this remind us once again that scientists must stay informed and be ready to adapt as sentiments and attention shift in real time. Climate change and terrorism may not have seemed linked until now (though there is good reason to think that this connection runs deep), but the dramatic juxtaposition of both in Paris changes that. Scientists can offer our voices to the discussion, but it is vital that we keep abreast of the shifting political landscapes that influence the conduct and application of science. Keeping this bird’s-eye view is critical, because while these terrorist attacks certainly demand attention and action, they do nothing to change the urgent need for action on the climate, on health, and on a whole host of issues that require scientific expertise.

While staying current and engaging in policymaking is always a good thing for science (feel free to contact your representatives at any time), situations like the Syrian refugee crisis offer a rare chance to lend a hand directly. Science is one of humanity’s greatest shared endeavors, an approach to understanding the world that capitalizes on the innate curiosity that all people share. This shared interest has always extended to displaced peoples, with the resulting collaborations providing a silver lining to the negative events that precipitated their migrations. Where feasible, it would be wise for universities across the globe to welcome Syrians with scientific backgrounds; doing so would provide continuity and support for the displaced while preventing a loss of human capital. Efforts to this effect are currently underway in Europe, though it is unclear how long these programs can survive the tensions now straining that continent.

For good and ill, world events have always shaped science. The tragedies in France, Syria, and elsewhere have incurred great human costs, and they will serve as a test of our shared humanity. As practitioners of one of our great shared enterprises, scientists have a uniquely privileged place in society, and we should use our station to help people everywhere in any way possible.

Communicating about an Epidemic in the Digital Age - Live Stream of Forum


To watch this event in real time, please follow this link (from 5:30 to 7 p.m., 11/4)


How prepared are Philadelphia’s institutions to communicate with the public in the event of a future epidemic? What specific challenges were successfully or unsuccessfully addressed during the Ebola crisis that could provide learning points going forward? Are there successful models or case studies for handling communication during epidemics that are worth emulating?

These questions will be up for debate on Wednesday at the University of Pennsylvania in a forum open to the public. The event will be held in the Penn bookstore (3601 Walnut St.) upstairs meeting room from 5:30 to 7 p.m. on Wednesday, November 4.

To learn more about this event, please read our preview article.

New funding mechanism aims to bring balance to the biomedical research (work)force

by Chris Yarosh

This past March, the National Cancer Institute (NCI) announced a new funding mechanism designed to stabilize the biomedical research enterprise by creating new career paths for PhD-level scientists. That mechanism, called the NCI Research Specialist Award (R50), is now live. Applications (of which there will likely be many) for the R50 will be accepted beginning in January, with the first crop of directly-funded Research Specialists starting in October 2016. More details about the grant can be found in the newly released funding opportunity announcement (FOA).

Why is this a big deal? In recent years, there have been increasing calls for reform of the biomedical enterprise. More people than ever hold PhDs, and professor positions (the traditional career goal of doctorate holders) are scarce. This leaves many young researchers trapped somewhere in the middle, in postdoctoral positions, something we've talked about before on this blog. These positions are still considered training positions, and without professor openings (or funding for independent labs), these scientists often take industry jobs or leave the bench altogether when academic employment fails to materialize.

On the flip side, modern academic labs are highly dependent on a constant stream of graduate students and postdocs to do the lion’s share of the research funded by principal investigator-level grants (R01s). This creates a situation where entire labs can turn over in relatively short periods of time, possibly diminishing the impact of crucial research programs.

But what if there was another way? That, in a nutshell, is the aim of the R50. By funding the salaries (but not the research costs) of PhD-level researchers, the R50 seeks to create opportunities for scientists to join established research programs or core facilities without having to obtain larger grants or academic appointments. This attempts to kill two birds with one stone: more jobs for PhDs, less turnover in labs already funded by other NCI grants.

This approach is not all roses, however. For one, this doesn’t change the fact that research funding has been flat or worse in recent years. Even with more stable staffing, the amount of research being completed will continue to atrophy. Moreover, the money for future R50s will need to come from somewhere, and it is possible that this will put additional strain on the NCI’s budget if overall R&D spending is not increased soon. Lastly, there are some concerns about how the R50 will work in practice. For example, Research Specialists will be able to move to other labs with NCI approval, but how will this actually play out? Will R50s really be pegged to their recipients, or will there be an implicit understanding that they are tied to the supporting labs/institutions?

It should be noted that this is only a trial period, and that full evaluation of the program will not be possible until awards are actually made. Still, this seems like a positive response to the forces currently influencing the biomedical research enterprise, and it will be interesting to see if and when the other NIH institutes give something like this a shot.

Communicating about an Epidemic in the Digital Age

**Link for live streaming of this event can be found here**

by Hannah Shoenhard, Jamie DeNizio, and Michael Allegrezza

Craig Spencer, a New York City doctor, tested positive for Ebola on October 23. The story broke online the same day, and by the next morning, tabloids were plastered with images of masked and gowned health workers under headlines such as “Bungle Fever” and “Ebola!” Late-night comedy, Twitter, local news: the story was inescapable, the hysteria palpable. All in all, only eleven Ebola patients were treated on U.S. soil. But the media’s reaction affected the lives of anyone who watched television or had an internet connection.

The Ebola epidemic in West Africa has died down. Liberia is Ebola-free, while Sierra Leone and Guinea continue to report cases in the low single digits per week. Most promisingly, a new vaccine has been shown to be highly effective in a clinical trial. Given the vaccine, it seems that the likelihood of future epidemics on the scale of the one in 2014 is low. But especially during the early days of the epidemic, miscommunication and mistrust of international public health workers slowed the medical response and exacerbated the epidemic. And, as the reaction to the New York City case shows us, this problem is not unique to West African countries.

Even if the threat from Ebola in particular is under control, infectious disease is endemic to civilization. Knowing that new epidemic threats can emerge at any time, important questions need to be considered. 

How prepared are Philadelphia’s institutions to communicate with the public in the event of a future epidemic? What specific challenges were successfully or unsuccessfully addressed during the Ebola crisis that could provide learning points going forward? Are there successful models or case studies for handling communication during epidemics that are worth emulating?

These questions will be up for debate on Wednesday at the University of Pennsylvania in a forum open to the public. The event will be held in the Penn bookstore (3601 Walnut St.) upstairs meeting room from 5:30 to 7 p.m. on Wednesday, November 4.

The event is hosted by two graduate student groups at Penn, the Emerging Leaders in Science and Society (ELISS) Fellows and the Penn Science Policy Group, with the goal of fostering collaborative ideas for effective channels to maintain trust, manage fear, and communicate accurately during potential future epidemics.

On the panel for the forum will be three innovators in communicating public health issues: Mitesh Patel, MD, MBA, MS; James Garrow, MPH; and Giang T. Nguyen, MD, MPH, MSCE. Moderating the discussion will be Max King, Ph.D., Associate Vice Provost for Health and Academic Services at Penn.

Community members are encouraged to attend the forum with questions and comments. People can also watch a live stream of the event (check our Twitter page for the link) and submit questions via Twitter with #EpidemicPhilly.


Biographies of the panelists and moderator


Mitesh Patel
 
Mitesh S. Patel, MD, MBA, MS, is a board-certified general internist, physician-scientist, and entrepreneur. He is an Assistant Professor of Medicine and Health Care Management at the Perelman School of Medicine and The Wharton School at the University of Pennsylvania. His work focuses on leveraging behavioral economics, connected health approaches, and data science to improve population health and increase value within the health care delivery system.

As a physician-scientist, Mitesh studies how innovative technology and connected health approaches can be used to passively monitor activity, and how health incentives can motivate behavior change. His prior work has been published in NEJM, JAMA, and the Annals of Internal Medicine, and has been featured in the New York Times, NPR, and CNN. Mitesh also co-founded Docphin, a startup that strives to improve the application of evidence-based medicine in clinical practice.

Mitesh holds undergraduate degrees in Economics and Biochemistry as well as a Medical Doctorate from the University of Michigan. He obtained an MBA in Health Care Management from The Wharton School and an MS in Health Policy Research from the Perelman School of Medicine at the University of Pennsylvania. Mitesh completed his internal medicine residency training and the Robert Wood Johnson Clinical Scholars Program at the University of Pennsylvania.

James Garrow

James Garrow, MPH, is a nationally recognized proponent and advocate for the use of social media and digital tools in public health. His role as Director of Digital Public Health in Philadelphia is among the first in the country charged with using new digital tools and techniques like social media, crowdsourcing, and big data. He also provides media relations support for the Philadelphia Department of Public Health.

An accomplished public speaker and noted thought leader, Jim has spoken at conferences across the US on social media use in public health and emergency response. He is an active social media user, maintaining two regularly scheduled Twitter chats and a blog on crisis and emergency risk communication.

Jim obtained a B.S. in Applied Sociology from Drexel in 2001 and a Master of Public Health from Temple in 2011.


Giang T. Nguyen

Dr. Nguyen, MD, MPH, MSCE, is an Assistant Professor in the School of Medicine, Department of Family Medicine and Community Health at Penn.  He is also Chair of the MD-MPH Advisory Committee and a member of the MPH Program Curriculum Committee.

Dr. Nguyen leads the Penn Asian Health Initiatives. His research focuses on Asian immigrant health, with concentrations in cancer control, disease prevention, and community-based participatory research. His community engagement work has included outreach to Vietnamese and other Southeast Asian refugees, health fairs and immunization clinics, cancer education workshops, and advocacy on HIV/AIDS and LGBT issues. He serves on boards and advisory committees for several Asian-serving organizations, including the Asian and Pacific Islander National Cancer Survivors Network.

Dr. Nguyen is also the Medical Director of Penn Family Care, the clinical practice of the University of Pennsylvania's Department of Family Medicine and Community Health. He provides direct care to adult and pediatric patients in the primary care setting and teaches medical students and family medicine residents. He also is a Senior Fellow of the Penn Center for Public Health Initiatives, where he is a core faculty member for the Penn MPH program.
Max King
Previously the coordinator of Penn State's University Scholars Program, Max King, Ph.D., MS, is now Associate Vice Provost for Health and Academic Services at The University of Pennsylvania.
Dr. King holds three degrees from Penn State: a B.S. in Biological Health, M.S. in Health Education, and Ph.D. from the Interdisciplinary Program in Educational Theory and Policy. His research focus is the multidimensional Methodology of Q-Analysis, or Polyhedral Dynamics, a higher-level structural analysis approach derived from algebraic topology.

Dr. King also held an appointment as an Affiliate Assistant Professor and a member of the graduate faculty in the Department of Administration, Policy, Foundations, and Comparative/International Education at Penn State. He taught educational foundations, comparative education, British education, research methods, and international education. He also has extensive experience in computer systems, developing mainframe and microcomputer research and thesis applications.

NIH to chimera researchers: Let's talk about this...

by Chris Yarosh

When we think about the role of the National Institutes of Health (NIH) in biomedical research, we often think only in terms of dollars and cents. The NIH is a funding agency, after all, and most researchers submit grants with this relationship in mind. However, because the NIH holds the power of the purse, it also plays a large role in dictating the scope of biomedical research conducted in the U.S. It is noteworthy, then, that the NIH recently delayed some high-profile grant applications related to one type of research: chimeras.

Chimeras, named for a Greek mythological monster composed of several different animals, are organisms that contain genetically distinct cells. In the lab, this commonly refers to animals that contain cells from more than one species. Research into chimeras is not new; scientists have been successfully using animal/animal (e.g. sheep/goat) chimeras for over 30 years to learn about how animals develop. Human/animal chimeras are also a common research tool. For example, the transfer of cancerous human tissue into mice with weakened immune systems is standard practice in cancer biology research because it allows researchers to test chemotherapy drugs in a system that is more complex than a dish of cells before testing them in human subjects. These experiments are largely uncontroversial, save for individuals who fall into the anti-animal testing camp (and those who dispute the predictive power of mouse models in general). Why, then, has the NIH decided to pump the brakes on this line of research?

Like many things, the answer lies in the timing. The temporarily-stalled research involves injecting human pluripotent cells—undifferentiated cells that can develop into any number of different cell types—not into mature animals, but instead into animal embryos. Unlike the tumor-in-a-mouse research mentioned above, this kind of experiment is specifically trying to get normal human cells to develop as an animal matures and remain, well, normal human cells. One idea is that someday we could grow an organ (liver, pancreas, etc.) in an animal, such as a pig, that is still a human organ. This would lower the barrier for successful transplantation, meaning that somebody in serious need of a new liver could receive one from livestock instead of waiting for a human donor from a transplant list. Another thought is that chimeric animals will better model human physiology, making subsequent clinical trials more accurate.

If you read the last paragraph and felt a bit uneasy, you’re not alone. For some, this type of research crosses the invisible line that separates humans from animals, and is therefore unacceptable. Others find this research troubling from an animal welfare standpoint, and still others worry about unanticipated differentiation (e.g. “we wanted a liver, but we found some human cells in the pig’s nerves, too”) or unethical uses for this type of technology.

The NIH hears these concerns, and it wants to talk about them before giving scientists the go-ahead to use public funds on this type of research. Some researchers have reacted negatively, fearing broader restrictions in the future, but I think this is an important part of the scientific process. We live (and, for scientists, work) in an era of unprecedented ability to modify genomes and cell lineages, and human/animal chimeras are just one example of a type of research destined for more attention and oversight. It is important to get the guidelines right.

The NIH will convene a panel of scientists and bioethicists to discuss human/animal chimera research on November 6th, so keep an eye out for possible policy revisions after then. Given the promise of this type of research and the potential concerns over its use, this surely is only the beginning of the deliberative process.

UPDATE (11/05/2015): Scientists from Stanford University have posted an open letter in Science calling for a repeal of the current restrictions in this field. The full letter, found here, argues that there is little scientific justification for the NIH's stated concerns. Over at Gizmodo, the NIH has responded by claiming that the true purpose of the stop order and review is to "stay ahead" of current research and anticipate future work. This is consistent with the NIH's views as articulated on the Under the Poliscope blog. All things considered, the workshop tomorrow, and any guidelines resulting from it, should be very interesting for people who wish to develop and use these tools.

Kickoff Meeting!

Hey everyone,

Please join us for our kickoff meeting this Thursday, September 17th in BRB 1412 at 6PM. We will be previewing the year ahead, and VP Chris Yarosh will talk about advocacy and his experience with ASBMB Hill Day. Refreshments included!