For The People: The Value of Inclusivity in Scientific Research

by Clara Malekshahi

The world of scientific research is notoriously exclusionary. The use of jargon in papers, in presentations, even in interviews impairs the public’s understanding of science and their ability to engage with research. The high barrier to participation in science breeds widespread exclusion of underrepresented minorities: twice as many underrepresented minority students switch out of scientific fields of study as their white or Asian peers. Exclusion in scientific research is especially concerning in biology and biomedical fields, which so directly impact people’s lives. In particular, workers in biochemistry, pharmacy, biology, and other biological fields are more likely to hold graduate degrees than those in almost any other field, restricting access to this knowledge to people with advanced training. All of this hinders engagement with the larger, non-scientific community. And yet, science is meant to serve the community at large. Such a vast disconnect makes it easier for misinformation to spread and for scientists’ credibility to erode.

But like a living organism, science moves and grows. Scientists are increasingly involving the public in many ways. Public effort is being used for data collection and project development. There are more and more programs in place to engage younger students — down to the high school level — in scientific discovery and learning. The non-scientific community is being included in scientific decisions and policy. Individual scientists are also reaching out to the public through newer channels, such as TikTok.

Publicly-Powered Databases

One way in which science is beginning to involve the public is through large databases. Smoke Sense, run by the Environmental Protection Agency (EPA), asks users to monitor their experiences with smoke and wildfires. In turn, the EPA disseminates vital information about air quality in an easily accessible app. Participants have been excited about the possibility of increased communication with organizers and responders in the event of a wildfire. The app also gives individuals across local communities more power, and groups particularly sensitive to wildfire smoke may benefit from more targeted protections.

Another database widely dependent on public input is eBird. Developed and maintained by the Cornell Lab of Ornithology, eBird is a bird-watching database. Birders from around the globe upload their sightings in the form of checklists, and the eBird team turns these into interactive maps of seasonal abundance and long-term movement trends for nearly 3,000 bird species. Data from eBird have been used to study everything from birds’ responses to climate change to their role in the prevalence and spread of disease. All of this publicly sourced data encourages involvement and interest from a wider audience.

Community Involvement in Research and Publishing

There are even more ways in which scientists are working closely with the public to carry out science. Perelman School of Medicine’s Dr. Eugenia South works on community science that directly benefits the people involved. Her research explores the effects of environmental interventions on the mental and physical wellbeing of Philadelphia neighborhoods, especially minority neighborhoods. As part of her commitment to scientific inclusion, Dr. South works with several community coordinators on her team. With the community’s help, Dr. South has shown that interventions that turn vacant lots into green spaces decrease self-reported poor mental health and depression. In the 117 areas that received new plantings in addition to trash removal and regular maintenance, depression among respondents dropped by roughly 50%. In neighborhoods below the poverty line that received new green spaces, the drop was even larger. Her work also connects communities with urban designers to create more green spaces. Through the Deeply Rooted community collaborative, Dr. South’s team even encourages members of the public to apply for grants. These new connections are vital to increasing public engagement with scientists: they represent a tangible, rapid effort toward change.

This trend of public involvement extends to younger generations. Scientists are actively involving members of the public in peer-reviewed research, starting as early as high school. The Journal of Student Research, for example, specifically targets young would-be scientists. In the past 13 years, it has published over 2,000 articles, many of them by high schoolers from all over the world. And what are these high schoolers writing about? Everything from machine learning, to the multi-omics of molecular pathways, to the impacts of climate change. The journal is supported by a vast editorial team of PhD-level scientists, each of whom engages with these students as fully fledged scientists.

The Journal of Student Research is not the only avenue for high schoolers to engage in scientific publishing. In early 2022, the journal Microbiology Spectrum published a paper on avian paramyxovirus in New York City. To learn as much as possible about the prevalence of disease in the city’s pigeons, the Mount Sinai-based team behind the paper assembled a crew of community scientists: the New York City Virus Hunters. Five of these virus hunters, who became co-authors on the paper after collecting and analyzing data, were high school students. This engagement with the New York City Virus Hunters, high schoolers among them, is a huge step towards actively including the community in scientific activities.

Social Media - the Bad and the Good

Part of the public’s increased engagement with science comes from the increased accessibility of science on social media. In many ways, this is incredibly detrimental to both the public and scientists. Influencers and communicators on social media are often paid to disseminate information by industries that want to increase sales, such as the tobacco industry. Unfortunately, misinformation and false news often spread farther and are viewed more than the truth. This misinformation sows confusion and can erode trust in science and scientists. Although 54% of Americans get science news from general news outlets, only 28% of American adults believe that these outlets get their facts right, suggesting a significant amount of skepticism amongst U.S. adults.

Despite these flaws, social media is nonetheless partially responsible for increased accessibility, and it is sometimes even helpful. Scientist influencers are all over TikTok, as are doctors and public health officials helping to disseminate key information. These science influencers have followings that can number in the tens of thousands, and they are increasing the scientific understanding of the general public every day. Additionally, social media giants have successfully partnered with health agencies like the WHO to disseminate information about COVID-19 vaccinations.

For too long, the subjects of scientific research—the general public—have not been included in active research. Citizen science databases are a great first step. But getting communities involved in making beneficial changes, and actively taking steps toward such changes, as Dr. South has done, is even better. That, finally, is the goal: to work together with the communities we serve to improve quality of life. And there is no lower age limit for engaging the public; high schoolers are increasingly getting involved in publishing research. All of this signals a change in the accessibility of science, and strengthens trust in scientists at a time when each new morsel of misinformation erodes credibility.


Who’s Afraid of the Big Bad AI?

by Katherine Huang

Earlier this year, during an icy spell in Boston, I attended my first American Association for the Advancement of Science meeting. On the last day of the meeting, I ran into a fellow attendee on my way back to the venue from getting lunch. We were both mildly lost, but once we found the right direction and could walk in a more relaxed manner, she remarked to me that there was a surge in the number of Artificial Intelligence (AI)-related panel sessions at the meeting this year, compared to last year. 

While an annual meeting that draws some four to five thousand attendees is only a small sample of the population, this surge reflects a broader rise in the presence of AI in our work and personal lives. It is a change that has sparked mixed reactions: according to a recent Pew Research Center study, 51% of adults in the US are more concerned than excited about increased AI usage in daily life. 

So, how wary should we be of AI technologies? And to what extent does our wariness come from media hype, as opposed to concrete problems that AI has already presented us with? 

Sensationalism in the Media

News outlets frequently dramatize their content to appeal to our emotions and curiosity, thereby boosting their readership. When it comes to AI, headlines like “Will AI take over the world and all our jobs?” immediately grab our attention and can make us fearful and antagonistic towards AI.

In the pursuit of sensationalist, science fiction-inspired news stories about AI, journalists can become biased in the kinds of information they want from experts. Brian K. Smith, a professor of computer science and education at Boston College, recalls a reporter asking him about the possibility of an AI apocalypse in which robots rise up against humans, à la Terminator or RoboCop. Smith challenged the reporter: “You’ve given me science fiction examples. Is there any instance so far in the history of humanity where machines have tried to rise up and destroy the human race?” The reporter ended the interview right there. 

“People are preoccupied with this one sort of model where AI is imagined as a person that will replace people,” said Ted Underwood, a professor of English and information sciences at the University of Illinois Urbana-Champaign. “Stories like that get a lot of attention.”

AI experts themselves can also be guilty of over-hyping their work. In 2016, “godfather of AI” and 2024 Physics Nobel laureate Geoffrey Hinton suggested that we should no longer train radiologists because AI would outperform humans in that field within the next five years, or “it might be ten.” Since that prediction, the radiology staff at the Mayo Clinic has increased by 55% to include more than 400 radiologists. Across the US, the American Board of Medical Specialties reported that there were 59,525 board-certified radiologists in June of 2019. By June of 2024, that number had risen to 66,962. 

MIT labor economist David Autor has argued that predicting AI will steal jobs from humans underestimates how complex human jobs actually are. Indeed, AI did not replace radiologists. Rather, it has helped radiologists work better, such as by flagging images that likely show an abnormality so they know where to look first, or by taking accurate, reproducible measurements that are variable when done by hand. Human radiologists are still needed to interpret the images and numbers, draw conclusions, and communicate their findings to patients and other physicians. Admirably, Hinton has acknowledged that he overstated the case for AI rendering human radiologists obsolete, updating his view: AI will make radiologists more efficient rather than replace them.

“I would like to see [media] coverage that spends more time thinking about how people can expand or augment what they’re doing, rather than the people versus robots script,” said Underwood. He highlighted that we need greater technology literacy for using AI tools in the workforce, as currently, “Most people don’t have the understanding to take full advantage of them, even if they technically have access.” 

In fact, many may only understand a small slice of what constitutes AI. “At this moment in time, when people say ‘AI,’ they almost always mean ChatGPT,” said Smith. 

ChatGPT, Generative AI, and the Humanities

Since entering the chat—pun very intended—in late 2022, ChatGPT has essentially become the face of AI. Yet AI research began much earlier, in the 1950s, and its products have long been part of our everyday lives, from Siri to email filters and even Roombas.

Smith credits the equivalence of ChatGPT with all of AI to the branding genius of OpenAI, the company behind ChatGPT. “OpenAI did such a phenomenal job by putting a simple interface on top of their language model and calling it ‘ChatGPT,’” he said. “It became so public-facing [while] the rest of what we talk about as AI is always buried.” 

AI is an umbrella term for computer-based technologies that perform tasks by trying to mimic human thought. ChatGPT is an application of generative AI, a newer type of AI that learns patterns from data and imitates them to produce text, images, video, code, and other content based on inputs or prompts. In contrast, traditional AI involves systems that mainly analyze and predict, rather than produce content. 

If we restrict our impressions of AI to ChatGPT and other generative AI tools, it is easy to see why AI has received such backlash in recent years. Students have used ChatGPT to write essays that were assigned to teach critical thinking and communication skills, defeating the purpose of those assignments. People can rapidly spread mis- and disinformation with AI-generated content, and while some instances might be light mischief, others can seriously harm reputations and hinder emergency responses. Additionally, scammers have used AI to write text for phishing emails and advertise fake products, cheating people out of time and money.

While many denounce such misuses of generative AI, a squishier issue involves using generative AI to produce art and writing for private enjoyment. Certainly, training AI systems on copyrighted material without their creators’ permission raises ethical concerns. So does taking business away from human artists and writers when the creative industry already tends not to pay well. But what if someone who cannot afford to commission a human artist just wants to Studio Ghibli-fy a family photo with ChatGPT to put on their desk?

If the technology is available, it is hard to persuade people not to use AI-generated work because they “should” help human creatives make a living, or because the output does not have enough human labor behind it to be “real” art. The focus then turns to what companies might do to restrict access to these technologies, or to pay creatives for using their intellectual property to train generative AI. 

Underwood, whose research involves text-mining digital libraries, has thought about fair use for a long time. He believes training AI models with copyrighted material should be fair use as long as the material is not too new—say, created within the last five years—and as long as the models are only extracting patterns from it as opposed to reproducing it, which is legally prohibited. He offers this perspective on the flip side: 

“Suppose we rule that training is not fair use. You have to pay publishers for their archives in order to train on them. The big tech companies are perfectly capable of doing that; they absolutely could buy up publishers’ back catalogs. If we determine that the right to train is something that can be bought and sold, we’d now have big intellectual monopolies where every professional in the country has to pay $500 a month to [a company] to have their [AI] model, which is the exclusive model. That’s much more terrifying to me than the idea that these models are going to compete with creatives. If we have [AI technology] open, creatives can figure out how to use it themselves and still have a job. If it’s a market that can be cornered, there is potential for concentration of power and increase of economic inequality. To me, the intellectual property argument against AI has a high chance of backfiring on those important pieces.”

Though the concept of AI versus humans looms large, generative AI competition against human creatives is not the only possible relationship between AI and the humanities. Memo Akten, Sougwen Chung, and other acclaimed artists have already integrated AI into their work. Underwood sees promise in using large language models (LLMs) to computationally analyze vast amounts of text for drawing conclusions about literary history, a field of research known as distant reading. In the past, he has done this with statistical models to study trends like gender stereotypes across two centuries of written fiction. More recently, he has shown an example of how an LLM-based AI model can not only be more accurate than a statistical one in estimating time passage in fiction—another trend he has studied before—but can also be more interpretable in documenting its “thought process” verbally. 

“I don’t see us just not doing the technology,” Underwood said. During a presentation at the Wolf Humanities Center at Penn, he recommended that humanities scholars work with AI developers to help their models represent a more diverse array of cultural perspectives. In computer science terms, this is called pluralistic alignment. Fortunately, Underwood notes, computer scientists recognize this need for humanities input and better cultural representation.

Panic, Policy, and Regulation 

Patrick Grady and Daniel Castro from the Center for Data Innovation describe the phenomenon of the “tech panic cycle” as a bell curve of public concern. In this curve, Trusting Beginnings in a new technology are followed by a Rising Panic that eventually reaches a Height of Hysteria. This hysteria then subsides in a Deflating Fears stage, and finally we reach a Point of Practicality in Moving On from the panic. 

In April of 2023, Grady and Castro placed the hysteria surrounding generative AI, due to the introduction of ChatGPT, in the Rising Panic segment of the curve (Figure 1). Now, two years later, Smith believes we are still in the same vicinity. He noted that in practice, the curve is not smooth, but rather contains many oscillations, and different sectors of people—educators, creatives, etc.—can be at different points along the curve. Overall, though, “I don’t think we’ve come as far as Practicality [yet],” Smith said. “It’s a moving target.” Printed books, recorded sound, and motion pictures all went through tech panic cycles, suggesting that at some point, we will reach an overall Point of Practicality with generative AI—and AI broadly—as well. 

Figure 1. The tech panic cycle. Grady and Castro place generative AI in its Rising Panic phase as of April 2023. Taken from https://datainnovation.org/2023/05/tech-panics-generative-ai-and-regulatory-caution/.


This is not to say that we should just sit back and let the curve happen; we can still make more careful choices about how generative AI tools are developed and implemented. The extent to which this should involve regulation at the policy level is a thornier issue. Some, such as Grady and Castro, think that regulating generative AI will only stifle innovation, even with the flexibility that frameworks like the EU AI Act offer.

“I think there is room to say, ‘we want to be responsible,’” Smith believes, citing policy development surrounding nuclear energy and biotechnology. “I think you could do that without saying, ‘and then there will be no more innovation.’ There have to be some kind of rules or constraints that govern the decision-making around those things, and also conversations with developers where they’re in communities aligned with people who could help them see, ‘there’s a potential problem here.’” 

He understands Grady and Castro’s point, though: “A law is built on precedent, so it’s all going to be [hypothetical] until some specific cases come up.”

Our Future With AI

Many grievances against AI are not intrinsically technology issues, but social issues that AI forces us to reckon with. The panic people feel at the idea of losing their jobs to AI’s cheaper labor is a capitalism problem. Cheating in school did not start with ChatGPT; academic integrity, which Smith likes to say is the “AI” we are actually concerned about, is more about how teachers teach and assign work, and what values students learn from the people around them. Data centers that house AI machinery guzzle enormous amounts of water and electricity, an environmental cost that younger, more eco-conscious generations may want to weigh against the convenience of ChatGPT and other LLM-based tools. Misinformation is not a new phenomenon, either. AI has just made it easier and faster to produce, which means the general public needs more digital media literacy alongside improvements in watermarking AI-generated content.

There is no promise, as Underwood noted in his talk at Penn, that the good AI can do will outweigh its consequences. AI’s many associated problems, beyond what I have covered here, are very real and should not be dismissed in the name of technological progress. But, some aspects of AI are perhaps not as awful as their reputation to date, and we can still be optimistic about our ability to build and harness AI systems for good. 

When I asked Underwood if there was anything he particularly looked forward to AI being able to do, he said, “I would love to see [AI models] be able to tell a suspenseful story. Right now they don’t understand suspense.”

Will generative AI ever get there?

I guess we will find out. 



At a crossroads: has public trust in science been declining in the 2020s?

by Carol Garcia

Science is a constant. From cooking a meal to getting a vaccine, scientific advancements have improved the daily lives of people all over the world. But how much does the average person living in the U.S. trust science? To answer this question, we must look at different demographics and time periods, specifically before and after the COVID-19 pandemic. There is indeed a range of opinions on controversial topics like climate change, which can often lead people to question the integrity of scientific work itself. Additionally, flashy headlines such as “Can the public’s trust in science—and scientists—be restored?” have placed the scientific community under scrutiny, leading some Americans to distrust science in the last five years. An article about the decline of trust in science reported that in April of 2020, 87% of Americans expressed “at least a fair amount of confidence” in scientists, whereas in a similar survey taken in November of 2024, only 76% believed that “scientists act in the best interest of the public”. Currently, as the scientific community reels from a flurry of executive orders and funding cuts, some scientists believe this will “erode the public trust in science” once again. In this piece, we briefly discuss trust in science during and after the COVID-19 pandemic and how this major event has impacted the public’s view of science.

Read more

The Mental Health Crisis of STEM Graduate Students: How Advisor Restructuring and Evidence-based Policy Can Help

by Carol Garcia

The mental health of academics across the country has been in decline in recent decades. From undergraduate students to early career researchers, concern is growing about the heavy toll that academic pressures take on mental health. A study on the prevalence of anxiety and depression amongst graduate students found that 20-50% of graduate students report symptoms during their training. Attending graduate school in any field is stressful, but due to the nature of STEM (science, technology, engineering, and mathematics) research, mental health issues can be particularly prevalent among graduate students in the sciences. Across the country, many seek solutions from their own institutions, only to find that these very institutions often lack the resources necessary to handle a mental health crisis.

Read more

Will Our Climate Change Faster Than Our Attitudes Towards It Can?

by Skyler Berardi

This is the third post in our series on the consequences of outside influences on the performance and communication of science.

This November, National Geographic uploaded a post on Instagram titled, “Climate change could impact where we live. Are these cities ready?” I had already been researching public attitudes towards climate change for this article, so I was curious to see what discourse was happening in the comments section. I scrolled and found an all-too-familiar debate. A handful of folks were posting in support of climate change action, including one person who penned, “to protect Earth is a burning topic and must be focused upon.” Then, there were the dissenters:

Read more

Medicines Shaped by Profit and Politics

by Gabriel Iván Vega Bellido

This is the second post in our series on the consequences of outside influences on the performance and communication of science.

Medical research has a history going back to when the Egyptians started documenting the medicinal properties of plants, but it has changed drastically in the context of the industrial revolution and capitalism. Though these changes have undoubtedly contributed to medical advancements and an increase in longevity, the interests of industry can often be in tension with optimal human health. Notable examples include the enormous fast and processed food industries’ contributions to the current obesity epidemic, and large oil companies’ continued promotion of fossil fuels despite having known of their environmental impact for at least 50 years. Given the multibillion-dollar size of the medical research industry, there is ample room for such conflicts of interest. This post will examine how the external interests of industry and politics have recently shaped medicine of the pharmaceutical and psychedelic kind.


Read more

On The Rise and Fall of Psychedelic Research: Ethical Lessons For Its Revival

by Clara Raithel

This is the first post in our series on the consequences of outside influences on the performance and communication of science.

More than fifty years after the criminalization of psychedelic drugs, psychedelic research is experiencing what many call a “renaissance” [1,2]. Researchers across the world are conducting clinical trials testing the efficacy of psychedelic drugs, such as ecstasy (MDMA), mescaline, lysergic acid diethylamide (LSD), and psilocybin, as treatments for various mental illnesses, with some results pointing towards therapeutic potential [3,4]. This surge of scientific interest is not a first: in the 1950s and 1960s, researchers first got their hands on psychedelics and saw in these compounds great potential for a breakthrough in mental health science. Instead of delivering a breakthrough, however, psychedelic drugs gained a bad reputation and were effectively banned in 1970 when the US government initiated its “war on drugs”. How did the perception of psychedelic drugs change so drastically? Contemporary psychedelic scientists often put the blame on counterculture figures. The most prominent scapegoat is Dr. Timothy Leary, a Harvard psychologist who lightheartedly shared the drug at parties, provoking a wave of negative press surrounding LSD that ultimately justified the “war on drugs” in the public eye. But explaining the bad reputation of past psychedelic research takes more than just pointing fingers at “bad scientists”.

Read more

Reflecting on depictions of scientists over the centuries

By Gabriel Iván Vega Bellido 

This is the ninth post in our series about how science is communicated and the consequences thereof.

Most people don’t interact with professional scientists on a regular basis. Therefore, the depictions of scientists in popular media play a significant role in influencing the general public’s expectations, trust, and understanding of the scientific community. If you were to ask your friends and family who aren't in scientific fields about their understanding of a scientist, their responses would likely be shaped by a mix of both real-life and fictionalized portrayals encountered through various media. This post aims to contemplate how some of the most popular depictions of scientists in English-speaking media, including various works of fiction, have reflected and influenced the public’s perception of science and scientists.

Read more

Engaging Communities to Build Trust for More Effective Medical Treatment and Scientific Research

By David Sidibe

 This is the eighth post in our series about how science is communicated and the consequences thereof.

Mistrust in the scientific and medical community is reaching a boiling point. You don’t have to look far to find evidence of this fact, given the politicization of medicine and vaccines during the COVID-19 pandemic. While I would like to say that there is one cause of this mistrust, the reality is that it is multi-faceted and rooted in a history of careless and, quite frankly, horrific treatment of patients. The impact of this mistrust of the medical community among people of color and people from underrepresented minority (URM) backgrounds is of particular concern. Ethnic, racial, and gender health disparities are prevalent in communities throughout the US. Current and historical events (see the Tuskegee Syphilis Study, HeLa cells, and the AIDS epidemic for examples) have only further amplified the mistrust within URM communities.

Read more

Philly Spotlight: Philly AIDS Thrift @ Giovanni's Room

By Kay Labella

For Pride Month, the PSPDG blog is collaborating with students from LTBGS and Lambda Grads for a series of posts highlighting the LGBTQ+ community and related matters at Penn and beyond.

Looking for your new favorite bookstore, ideally one with a rich and storied history? Well, look no further than Philly AIDS Thrift @ Giovanni’s Room. The oldest queer bookstore still in operation in the United States, Giovanni’s Room was founded in 1973 by Tom Wilson Weinberg, Dan Sherbo, and Bern Boyle. These three members of the Gay Activists Alliance (GAA) created a space that was part bookshop and part community center for locals and visitors alike. The store has changed hands and locations several times over the years, including its purchase in 2018 by Philly AIDS Thrift, but the mission remains the same: to be the number one source for new and lightly used LGBTQ fiction and non-fiction, comics, music, artwork, and more. Giovanni’s Room offers over 7,000 titles on its shelves as well as a database of more than 48,000 titles, a veritable treasure trove of queer literature.

Biography Spotlight: Ben Barres

By Kay Labella

For Pride Month, the PSPDG blog is collaborating with students from LTBGS and Lambda Grads for a series of posts highlighting the LGBTQ+ community and related matters at Penn and beyond.

Dr. Ben A. Barres (MD/PhD) was born September 13, 1954, in West Orange, New Jersey. After graduating from the Massachusetts Institute of Technology with a B.S. in Biology, he went on to obtain his medical degree from Dartmouth Medical School in 1979. While in his neurology residency at Weill Cornell Medicine, Barres found himself intrigued by neurodegeneration and glial cell function; he subsequently resigned from his residency to pursue a PhD in neurobiology at Harvard Medical School so that he could research these subjects. In his time as a postdoc and later as head of his own lab at Stanford University, Barres remained at the forefront of scientific discovery. His lab published numerous critical studies expanding our understanding of the roles of astrocytes, microglia, and the blood-brain barrier, as well as how synapses form within the brain, among other topics. As a PI, he was well-loved and dedicated to the success of his trainees.

Read more

Science Diplomacy to Address LGBTQ+ Exclusion, Harassment, and Career Limitations in STEM

By Stefan Peterson

For Pride Month, the PSPDG blog is collaborating with students from LTBGS and Lambda Grads for a series of posts highlighting the LGBTQ+ community and related matters at Penn and beyond.

Members of the LGBTQ+ community in science, technology, engineering, and math (STEM) deal with many challenges beyond those faced by their non-LGBTQ+ peers. Scientists worldwide share experiences of fearing to come out to colleagues, a fear that is reasonable considering that harassment is 30% more likely for LGBTQ+ individuals in STEM than for their non-LGBTQ+ peers. Harassment and discrimination against LGBTQ+ people in STEM result in higher rates of queer students dropping out of STEM majors and of LGBTQ+ scientists planning to leave their careers. Nations and institutions around the world urgently need to address these issues to support the LGBTQ+ community in STEM. Over the past two years, one team of early career scientists has been working to address these issues in the United States and the United Kingdom through science diplomacy.

Read more

Hidden Inconvenience: The Search for Gender-Inclusive Restrooms at UPenn

By Maxwell Pisciotta

For Pride Month, the PSPDG blog is collaborating with students from LTBGS and Lambda Grads for a series of posts highlighting the LGBTQ+ community and related matters at Penn and beyond.

It is no secret that gender-neutral restrooms on the University of Pennsylvania campus are often difficult to find. The difficulty, of course, depends on your department, the buildings you occupy, the age of those buildings, and the school that owns them. This is to say that yes, there are gender-inclusive restrooms throughout campus, but if you happen to be in a building that does not have one, you may have to go as far as two or three buildings over to locate one. For faculty, staff, and students who do not often leave their “home” building, or who spend long hours in lab, this can pose a substantial inconvenience.

Read more

Interview with Kevin Schott, Director of Engagement for the Eidos LGBTQ+ Health Initiative

By Kay Labella

For Pride Month, the PSPDG blog is collaborating with students from LTBGS and Lambda Grads for a series of posts highlighting the LGBTQ+ community and related matters at Penn and beyond.

Founded in 2022 by Dr. José Bauermeister, the Eidos LGBTQ+ Health Initiative was created to address persistent health disparities facing LGBTQ+ communities. PSPDG, in collaboration with LTBGS, was fortunate enough to interview Kevin Schott, Eidos’ Director of Engagement, about this fantastic partnership, which aims to streamline the translation of academic research into social impact.

Read more

Hitting a Wall: Monetized Scientific Publications and the Potential Shift to Open Access Literature

By Maya Hale

This is the seventh post in our series about how science is communicated and the consequences thereof.

Academia requires publications for success. A publication can be a condition for defending a PhD thesis (or dissertation), part of a tenure application, or a guide for research and education.

Lack of access to primary literature isn’t just restrictive to academics. It also acts as a barrier for aspiring students to break into science. Students submitting papers to the Journal of Emerging Investigators (JEI), a volunteer-run scientific journal for middle and high school students, are often unable to explore certain questions or even make reviewer edits to their submissions because they do not have access to publications on their research topic or the statistical analysis needed to improve publication quality.  

Read more

Science Communication Expectations of Early Career Scientists

By Alexandra Ramirez

This is the sixth post in our series about how science is communicated and the consequences thereof.

Communicating science takes many forms and is practiced by those in all stages of their scientific careers. Likely the most common and most recognizable type of science communication is publishing research articles in scientific journals. These articles can be accessed by audiences around the world, both inside and outside the scientific field (though not all are available through open-access sources). Publications are also increasingly becoming the currency needed for career growth and success in academic research. However, going from project conception to publication of a research article takes years. For those just starting out in research, publishing is a highly sought-after goal that cannot be immediately accomplished.

Read more

Broken Illusions: When Scientists Fabricate or Falsify Data

By Zeenat Diwan

This is the fifth post in our series about how science is communicated and the consequences thereof.

“Science has the potential to address some of the most important problems in society and for that to happen, scientists have to be trusted by society and they have to be able to trust each others' work. If we are seen as just another special interest group that are doing whatever it takes to advance our careers and that the work is not necessarily reliable, it's tremendously damaging for all of society because we need to be able to rely on science.”

—Ferric Fang, quoted in Jha (2012)

Read more

The evolution and impact of scientific preprints in academic communication

By Amanda N. Weiss

This is the fourth post in our series about how science is communicated and the consequences thereof.

Peer-reviewed manuscripts often serve as the primary way to disseminate information within academia. People trust that the information they’re reading is reliable and rigorous, as other experts in the field have already vetted the paper to make sure that it’s of high quality. However, peer review can be a slow process, preventing valuable information from reaching audiences in a timely manner. This can be a barrier in fields that are rapidly advancing, and is especially problematic when information is needed as quickly as possible, as is the case during public health emergencies. 

Read more

Bias in peer review: Who is given a voice in science?

By Clara Raithel

This is the third post in our series about how science is communicated and the consequences thereof.

Whether it is an application for research funding or a manuscript sent to a scholarly journal for publication, writing is an essential aspect of scientific work. The majority of the resulting output is evaluated by other scientists, in a process referred to as peer review. The ultimate purpose of this evaluation is to ensure the originality, importance, and quality of the academic work before it is executed or made publicly available. In other words, grants are awarded and manuscripts are published only when scientific standards are met. As such, peer review represents a critical gatekeeping moment that can define the outcome of entire scientific careers, simply because so much in science depends on the funding available and the number of papers published. However, its consequences reach far beyond an individual’s career: peer review shapes the knowledge produced and shared with the rest of the world, thereby potentially affecting the lives of millions of people.

Read more

Science Civics 101: Re(Building) Trust in Science

This is the second post in our series about how scientific findings are communicated and the consequences thereof.

“Do your own research.” It sounds harmless, even admirable. But dig a little deeper, and you’ll find that this philosophy is a major antagonist in the fight against scientific misinformation. Anti-vaxxers “did their own research” when they found a Lancet paper (later retracted) that claimed a link between the MMR vaccine and the onset of autism. Despite the paper’s retraction and its claims having no scientific basis, a nationally representative survey conducted in 2019 found that 18% of respondents believed vaccines cause autism. Ivermectin, an anti-parasitic drug most commonly used in veterinary practice, received attention after a few early studies suggested it might prevent and treat COVID-19 infection. It did not help when Joe Rogan, currently the most popular podcast host on Spotify, shared “his own research” and his personal belief in ivermectin’s efficacy. Much of the early research on ivermectin was later retracted by its authors or subjected to criticism. Still, owing to the initial hyperbolic attention, the CDC reported a 24-fold increase in ivermectin prescriptions in August of 2021 compared to the pre-pandemic baseline. This sharp rise in prescriptions was accompanied by concern from poison control centers and the FDA about the number of individuals reporting medical complications and hospitalizations following inappropriate ivermectin use.

Read more