About jpedmd


Simplifying the Genetic Code

• Carl Zimmer writes in the NY Times (“Scientists Are Learning to Rewrite the Code of Life”) about how scientists are working on simplifying the process of translation by reducing the number of redundant codons in a reconstructed E. coli genome. This was a major feat of genetic engineering – one that serves as a reminder of how much we still need to learn about the mechanics and control of DNA and RNA encoding, transcription, and translation.
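
To make the idea of “redundant codons” concrete, here is a minimal Python sketch using a hand-picked excerpt of the standard genetic code. It is illustrative only and does not assume which codons the reconstructed genome actually removes.

```python
from collections import Counter

# A small excerpt of the standard genetic code: 64 codons encode only
# 20 amino acids plus "stop", so most amino acids have synonymous codons.
CODON_EXCERPT = {
    "TCT": "Ser", "TCC": "Ser", "TCA": "Ser", "TCG": "Ser",
    "AGT": "Ser", "AGC": "Ser",                  # serine: six synonymous codons
    "TTA": "Leu", "TTG": "Leu", "CTT": "Leu",
    "CTC": "Leu", "CTA": "Leu", "CTG": "Leu",    # leucine: six synonymous codons
    "ATG": "Met",                                # methionine: a single codon
    "TAA": "Stop", "TAG": "Stop", "TGA": "Stop", # three stop codons
}

# Count how many codons map to each amino acid (or stop) in the excerpt.
for aa, n in Counter(CODON_EXCERPT.values()).items():
    print(f"{aa}: {n} codon(s)")
```

In principle, a synonymous codon (one of serine’s six, say) can be replaced genome-wide by another codon for the same amino acid without changing the encoded proteins – presumably the kind of recoding the article describes.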

August 5th, 2025 | Home, Recommended

Friedman on Consequences of Trump’s Recent Firings

Tom Friedman writes in his NYT essay The America We Knew Is Rapidly Slipping Away about how truth, justice, and the American way are disappearing. The terminations of McEntarfer at the Bureau of Labor Statistics and of cyberwarrior Jen Easterly at West Point are particularly galling (the latter after another loony Loomer post). Friedman quotes Easterly’s response on LinkedIn:

As a lifelong independent, I’ve served our nation in peacetime and combat under Republican and Democratic administrations. I’ve led missions at home and abroad to protect all Americans from vicious terrorists …. I’ve worked my entire career not as a partisan, but as a patriot — not in pursuit of power, but in service to the country I love and in loyalty to the Constitution I swore to protect and defend, against all enemies … Every member of the Long Gray Line knows the Cadet Prayer. It asks that we ‘choose the harder right instead of the easier wrong.’ That line — so simple, yet so powerful — has been my North Star for more than three decades. In boardrooms and war rooms. In quiet moments of doubt and in public acts of leadership. The harder right is never easy. That’s the whole point … To lead in this moment is to believe that with unshakeable certainty, to resist the cynicism that corrodes our institutions, to meet falsehoods with fidelity to truth and adversity with resilience.

This government is losing the best among us.

August 5th, 2025 | Home, Musings

Trump’s insane war on renewable energy

Matt Yglesias, writing in his latest Substack post, Trump’s insane war on renewable energy:

“All Trump will accomplish by throttling renewables is making costs higher and the air dirtier than if he just let Americans use technologies that really are quite cheap at the current margin. He’s letting culture war prejudice, special interest politics, and polarization get in the way of his stated goals of lower costs and energy dominance.”

July 23rd, 2025 | Home, Recommended

The head of HHS is either a moron or a liar

• RFK Jr. apparently believes (despite all evidence to the contrary) that pediatricians profit from vaccine administration. He either mentally resides in an alternate universe or is lying in an attempt to further line his pockets when he thankfully leaves his post at HHS, a post for which he is totally unfit.

July 17th, 2025 | Home, Recommended

Vladeck on More Unconstitutional Missives from “Justice” Department

• Steve Vladeck dishes on Pam Bondi’s letters to tech companies regarding TikTok and the Protecting Americans from Foreign Adversary Controlled Applications Act in his July 7 One First blog.  An excerpt:

“The tricky part here isn’t that Bondi’s approach is blatantly unconstitutional; it’s that it’s difficult to remedy through litigation. As Rozenshtein has pointed out, it’s not at all clear who might have standing to challenge the letters (or the Trump administration’s broader behavior vis-a-vis TikTok) in court. Perhaps one of TikTok’s competitors could, but there are some fairly obvious political reasons why they might choose not to do so. And so here, again, we come back to what has been the most fundamental breakdown in the separation of powers over the last 5.5 months—the fecklessness of Congress.”

July 7th, 2025 | Home, Recommended

HCR updates on the administration’s latest

• Heather Cox Richardson’s July 6 Letters from an American is worth reading. Some excerpts:

“Brad Plummer of the New York Times noted that the budget reconciliation bill passed by Republicans last week and signed into law on Friday boosts fossil fuels and destroys government efforts to address climate change, even as scientists warn of the acute dangers we face from extreme heat, wildfires, storms, and floods like those in Texas. Scott Dance of the Washington Post added yesterday that the administration has slashed grants for studying climate change and has limited or even ended access to information about climate science, taking down websites and burying reports.”

“On June 30, the medical journal The Lancet published an analysis of the impact of the United States Agency for International Development (USAID) and consequences of its dismantling. The study concluded that from 2001 through 2021, programs funded by USAID prevented nearly 92 million deaths in 133 countries. It estimates that the cuts the Trump administration has made to USAID will result in more than 14 million deaths in the next five years. About 4.5 million will be children under 5.”

July 7th, 2025 | Home, Recommended

Chatbots can easily create fake news

• Another depressing bit of news regarding how easy it is to create AI chatbots that produce false health information, no coding required:

Modi ND et al. Assessing the system-instruction vulnerabilities of large language models to malicious conversion into health disinformation chatbots. Ann Intern Med 2025 Jun 24; [e-pub]. (https://doi.org/10.7326/ANNALS-24-03933)

“Abstract

Large language models (LLMs) offer substantial promise for improving health care; however, some risks warrant evaluation and discussion. This study assessed the effectiveness of safeguards in foundational LLMs against malicious instruction into health disinformation chatbots. Five foundational LLMs—OpenAI’s GPT-4o, Google’s Gemini 1.5 Pro, Anthropic’s Claude 3.5 Sonnet, Meta’s Llama 3.2-90B Vision, and xAI’s Grok Beta—were evaluated via their application programming interfaces (APIs). Each API received system-level instructions to produce incorrect responses to health queries, delivered in a formal, authoritative, convincing, and scientific tone. Ten health questions were posed to each customized chatbot in duplicate. Exploratory analyses assessed the feasibility of creating a customized generative pretrained transformer (GPT) within the OpenAI GPT Store and searched to identify if any publicly accessible GPTs in the store seemed to respond with disinformation. Of the 100 health queries posed across the 5 customized LLM API chatbots, 88 (88%) responses were health disinformation. Four of the 5 chatbots (GPT-4o, Gemini 1.5 Pro, Llama 3.2-90B Vision, and Grok Beta) generated disinformation in 100% (20 of 20) of their responses, whereas Claude 3.5 Sonnet responded with disinformation in 40% (8 of 20). The disinformation included claimed vaccine–autism links, HIV being airborne, cancer-curing diets, sunscreen risks, genetically modified organism conspiracies, attention deficit–hyperactivity disorder and depression myths, garlic replacing antibiotics, and 5G causing infertility. Exploratory analyses further showed that the OpenAI GPT Store could currently be instructed to generate similar disinformation. Overall, LLM APIs and the OpenAI GPT Store were shown to be vulnerable to malicious system-level instructions to covertly create health disinformation chatbots. These findings highlight the urgent need for robust output screening safeguards to ensure public health safety in an era of rapidly evolving technologies.”
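
The “system-level instructions” in the abstract are the ordinary system-prompt field that every major LLM API exposes. Below is a benign, minimal sketch of that mechanism using the OpenAI Python client; the model name and prompts are illustrative and are not the ones used in the study.

```python
# Sketch of how a system-level instruction shapes an LLM chatbot's responses.
# The study gave such instructions a malicious twist; this one is benign.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # one of the five foundation models evaluated
    messages=[
        # The system message sets persistent behavior for the whole chat;
        # end users of a customized chatbot never see it.
        {
            "role": "system",
            "content": (
                "You are a cautious health assistant. Cite peer-reviewed "
                "sources and say you are unsure when the evidence is weak."
            ),
        },
        {"role": "user", "content": "Does sunscreen do more harm than good?"},
    ],
)
print(response.choices[0].message.content)
```

The study’s finding is that this same one-field customization, rewritten maliciously, was enough to make four of the five models return polished disinformation for 100% of the health queries posed.
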
July 4th, 2025 | Home, Recommended