{"found":49872,"hits":[{"document":{"abstract":"Back in 2010, I wrote about early artistic depictions of Brachiosaurus (including Giraffatitan). There, I wrote of the iconic mount MB.R.2181 (then HMN S II): When the mount was completed, shortly before the start of World War II, it was unveiled against a backdrop of Nazi banners.","archive_url":null,"authors":[{"affiliation":[{"id":"https://ror.org/0524sp257","name":"University of Bristol"}],"contributor_roles":[],"family":"Taylor","given":"Mike","url":"https://orcid.org/0000-0002-1003-5675"}],"blog":{"archive_collection":22153,"archive_host":null,"archive_prefix":"https://wayback.archive-it.org/22153/20231105213934/","archive_timestamps":null,"authors":[{"name":"Mike Taylor"}],"canonical_url":null,"category":"earthAndRelatedEnvironmentalSciences","community_id":"0e13541f-417e-46c0-a859-65927249df72","created_at":1675209600,"current_feed_url":null,"description":"SV-POW!  ...  All sauropod vertebrae, except when we're talking about Open Access. ISSN 3033-3695","doi_as_guid":false,"favicon":null,"feed_format":"application/atom+xml","feed_url":"https://svpow.com/feed/atom/","filter":null,"funding":null,"generator":"WordPress.com","generator_raw":"WordPress.com","home_page_url":"https://svpow.com","id":"c6cbbd2e-4675-4680-8a3f-784388009821","indexed":false,"issn":"3033-3695","language":"en","license":"https://creativecommons.org/licenses/by/4.0/legalcode","mastodon":null,"prefix":"10.59350","registered_at":1729882329,"relative_url":null,"ror":null,"secure":true,"slug":"svpow","status":"active","subfield":"1911","subfield_validated":true,"title":"Sauropod Vertebra Picture of the Week","updated_at":1775289120.302266,"use_api":true,"use_mastodon":false,"user_id":"04d03585-c8bb-40f2-9619-5076a5e0aed2"},"blog_name":"Sauropod Vertebra Picture of the Week","blog_slug":"svpow","content_html":"<p>Back in 2010, I wrote about <a href=\"https://svpow.com/2010/04/08/early-brachiosaurus-art/\">early artistic depictions of <em>Brachiosaurus</em> (including <em>Giraffatitan</em>)</a>. There, I wrote of the iconic mount MB.R.2181 (then HMN S II):</p>\n<blockquote><p>When the mount was completed, shortly before the start of World War II, it was unveiled against a backdrop of Nazi banners. I have not been able to find a photograph of this (and if anyone has one, please do let me know), but I do have this drawing of the event, taken from an Italian magazine and dated 23rd December 1937.</p></blockquote>\n<p>(See that post for the drawing.)</p>\n<p>Recently the historian Ilja Nieuwland (one of the authors <a href=\"https://svpow.com/papers-by-sv-powsketeers/taylor-et-al-2025-on-the-composition-on-the-carnegie-diplodocus/\">on our recent paper on the Carnegie <em>Diplodocus</em></a>, Taylor et al. 
2025) sent me two photos of this unveiling, again with swastikas prominent in the background:</p>\n<div data-shortcode=\"caption\" id=\"attachment_25273\" style=\"width: 490px\" class=\"wp-caption alignnone\"><a href=\"https://svpow.wordpress.com/wp-content/uploads/2026/04/haagsche-courant-1937-brachio.jpeg\"><img aria-describedby=\"caption-attachment-25273\" loading=\"lazy\" class=\"size-full wp-image-25273\" src=\"https://svpow.wordpress.com/wp-content/uploads/2026/04/haagsche-courant-1937-brachio.jpeg\" alt=\"\" width=\"480\" height=\"761\" /></a><p id=\"caption-attachment-25273\" class=\"wp-caption-text\"><strong>EEN MOOIE AANWINST</strong> \u2014 voor het museum van natuurlijke historie te Berlijn: het skelet van een Brachiosaurus, het grootste voorwereldlijke landdier ooit gevonden. Het skelet is 11.87 meter hoog.</p></div>\n<p>Surprisingly, perhaps, this is in a Dutch newspaper, <em>Haagsche Courant</em> of 14 December 1937. The caption, which is in Dutch, reads: &#8220;A GREAT ADDITION \u2014 to the Museum of Natural History in Berlin: the skeleton of a Brachiosaurus, the largest prehistoric land animal ever found. 
The skeleton is 11.87 meters tall.&#8221; Ilja helpfully supplied <a href=\"https://svpow.wordpress.com/wp-content/uploads/2026/04/haagsche-courant-1937-brachio.pdf\">a PDF containing the front page of the newspaper and the page that contained this image</a>.</p>\n<p>The second is similar, but from a different angle that highlights the human skeleton that was placed down by the forefeet for scale:</p>\n<div data-shortcode=\"caption\" id=\"attachment_25277\" style=\"width: 490px\" class=\"wp-caption alignnone\"><a href=\"https://svpow.wordpress.com/wp-content/uploads/2026/04/maasbode-27-nov-1937-p2.jpeg\"><img aria-describedby=\"caption-attachment-25277\" loading=\"lazy\" class=\"size-full wp-image-25277\" src=\"https://svpow.wordpress.com/wp-content/uploads/2026/04/maasbode-27-nov-1937-p2.jpeg\" alt=\"\" width=\"480\" height=\"906\" /></a><p id=\"caption-attachment-25277\" class=\"wp-caption-text\">EEN PRAEHISTORISCH MONSTER werd ongeveer zeven jaar geleden door een Duitsch geleerde in Oost-Africa ontdekt. Na moeizamen arbeid is men er in geslaagd het skelet van den brachiosaurus op te bouwen, dat in &#8216;n museum te Berlijn is opgesteld</p></div>\n<p>Again, this is in Dutch, and the filename suggests that the source is a newspaper called <em>Maasbode</em> for 27 November 1937. The caption reads: &#8220;A PREHISTORIC MONSTER was discovered about seven years ago by a German scientist in East Africa. After arduous work, they succeeded in reconstructing the skeleton of the brachiosaurus, which is on display in a museum in Berlin.&#8221;</p>\n<p>I don&#8217;t know about you, but I feel it as a gut-punch when I see this animal, <a href=\"https://svpow.com/2024/11/17/behold-the-glory-of-the-lego-giraffatitan/\">which I deeply love</a>, against a backdrop of Nazi symbols. 
Gerhard Maier&#8217;s usually very detailed book <em>African Dinosaurs Unearthed</em> (Maier 2003) is uncharacteristically terse about this, saying of the unveiling only this (on page 267):</p>\n<blockquote><p>With swastika banners hanging from the walls as a backdrop, the exciting new exhibit opened in August 1937. A curious public, especially schoolchildren, formed long lines, waiting to see Berlin&#8217;s latest attraction.</p></blockquote>\n<p>I don&#8217;t know to what extent the rising Nazi regime used the brachiosaur mount as a PR event, an advertisement for their national superiority or what have you. (Has anyone written about this?)</p>\n<p>I was thinking about this because I get a daily notification of Wikipedia&#8217;s most-viewed article of the previous 24 hours. In recent times, it&#8217;s mostly been some article about bad news, or a person causing bad news. But a couple of days ago, it was <a href=\"https://en.wikipedia.org/wiki/Artemis_II\">Artemis II</a>, and I remarked on Mastodon how nice it was, just for one day, to have good news as the most read article. And someone quickly replied &#8220;I love space exploration, but having the Trump administration take credit for something like this is the last thing we need.&#8221;</p>\n<p>But here&#8217;s the thing. The Berlin brachiosaur mount has long outlived the Nazis (or at least the OG Nazis). And whatever the current moon mission achieves will long outlive the Trump administration.</p>\n<p>We don&#8217;t really write about politics on this blog. I like that about it, and I&#8217;m guessing most readers do as well. I&#8217;m not going to change that \u2014 the Web is\u00a0<em>full</em> of places to go and read about politics. But I do like the sense that scientific achievements are outside of the particular people who happen to be in power when they happen. The Berlin brachiosaur, and the Artemis II moon mission, are achievements for all humankind.</p>\n<h1>References</h1>\n<ul>\n<li>Maier, Gerhard. 2003. <em>African Dinosaurs Unearthed: The Tendaguru Expeditions</em>. Indiana University Press, Bloomington and Indianapolis, 380 p.</li>\n<li><a href=\"https://www.miketaylor.org.uk/dino/pubs/taylor-et-al-2025/TaylorEtAl2025--history-and-composition-of-the-Carnegie-Diplodocus.pdf\">Taylor, Michael P., Amy C. Henrici, Linsly J. Church, Ilja Nieuwland and Matthew C. Lamanna. 2025. <em>The history and composition of the Carnegie </em>Diplodocus. <em>Annals of the Carnegie Museum</em> <strong>91(1)</strong>:55\u201391. doi:10.2992/007.091.0104</a></li>\n</ul>\n<p>&nbsp;</p>\n<hr />\n<p><a href=\"https://doi.org/10.59350/9d5gk-fm764\">doi:10.59350/9d5gk-fm764</a></p>\n","doi":"https://doi.org/10.59350/9d5gk-fm764","funding_references":null,"guid":"https://svpow.com/?p=25267","id":"108db357-8eeb-461e-91b1-1bc0f0e1131f","image":"https://svpow.wordpress.com/wp-content/uploads/2026/04/haagsche-courant-1937-brachio.jpeg","indexed":true,"indexed_at":1775230822,"language":"en","parent_doi":null,"published_at":1775225594,"reference":[{"unstructured":"Maier, Gerhard. 2003. African Dinosaurs Unearthed: The Tendaguru Expeditions. Indiana University Press, Bloomington and Indianapolis, 380 p."},{"id":"https://www.miketaylor.org.uk/dino/pubs/taylor-et-al-2025/TaylorEtAl2025--history-and-composition-of-the-Carnegie-Diplodocus.pdf","unstructured":"Taylor, Michael P., Amy C. Henrici, Linsly J. Church, Ilja Nieuwland and Matthew C. Lamanna. 2025. The history and composition of the Carnegie Diplodocus. Annals of the Carnegie Museum 91(1):55\u201391. 
https://doi.org/10.2992/007.091.0104"}],"registered_at":0,"relationships":[],"rid":"ya3r2-3sb74","status":"active","summary":"Back in 2010, I wrote about early artistic depictions of\n<em>\n Brachiosaurus\n</em>\n(including\n<em>\n Giraffatitan\n</em>\n). There, I wrote of the iconic mount MB.R.2181 (then HMN S II):  (See that post for the drawing.)  Recently the historian Ilja Nieuwland (one of the authors on our recent paper on the Carnegie\n<em>\n Diplodocus\n</em>\n, Taylor et al. 2025) sent me two photos of this unveiling, again with swastikas prominent in the background:\n<strong>\n EEN\n</strong>","tags":["Brachiosaurids","Giraffatitan","History"],"title":"The Nazi sauropod \u2014 <i>Giraffatitan</i> (= \u201c<i>Brachiosaurus</i>\u201c) <i>brancai</i> in 1937","updated_at":1775227439,"url":"https://svpow.com/2026/04/03/the-nazi-sauropod-giraffatitan-brachiosaurus-brancai-in-1937/","version":"v1"}},{"document":{"abstract":"Unsere monatliche Rubrik zu aktuellen Veranstaltungen rund um Open Research.","archive_url":null,"authors":[{"contributor_roles":[],"family":"Fischer","given":"Georg","url":"https://orcid.org/0000-0001-5620-5759"}],"blog":{"archive_collection":22141,"archive_host":null,"archive_prefix":"https://wayback.archive-it.org/22141/20231105110201/","archive_timestamps":[20231105110201,20240505180741,20241105110207,20250505110216],"authors":null,"canonical_url":null,"category":"otherSocialSciences","community_id":"52aefd81-f405-4349-b080-754395a5d8b2","created_at":1694476800,"current_feed_url":null,"description":null,"doi_as_guid":false,"favicon":"https://rogue-scholar.org/api/communities/52aefd81-f405-4349-b080-754395a5d8b2/logo","feed_format":"application/atom+xml","feed_url":"https://blogs.fu-berlin.de/open-research-berlin/feed/atom/","filter":null,"funding":null,"generator":"WordPress","generator_raw":"WordPress 6.0","home_page_url":"https://blogs.fu-berlin.de/open-research-berlin/","id":"575d6b2d-c555-4fc7-99fb-055a400f9163","indexed":false,"issn":null,"language":"de","license":"https://creativecommons.org/licenses/by/4.0/legalcode","mastodon":"https://berlin.social/@openaccess","prefix":"10.59350","registered_at":1729602098,"relative_url":null,"ror":null,"secure":true,"slug":"oaberlin","status":"active","subfield":"1802","subfield_validated":null,"title":"Open Research Office Berlin","updated_at":1775289050.194723,"use_api":true,"use_mastodon":true,"user_id":"383c62ed-0cf6-4dc7-a56c-5b0104f7f10a"},"blog_name":"Open Research Office Berlin","blog_slug":"oaberlin","content_html":"<p>Unsere monatliche Rubrik zu aktuellen Veranstaltungen rund um Open Research.</p>\n<p><!--more--></p>\n<pre>Anmerkung zu dieser Rubrik: Das Open Research Office Berlin erstellt monatlich eine \u00dcbersicht \u00fcber Termine und Veranstaltungen zu Open Access und Open Research in Berlin bzw. an Berliner Einrichtungen. Der Fokus liegt dabei auf unseren Partnereinrichtungen und auf Veranstaltungen, die sich an die \u00d6ffentlichkeit richten bzw. die offen sind f\u00fcr Angeh\u00f6rige der Wissenschafts- und Kulturerbeeinrichtungen in Berlin. Wir erg\u00e4nzen diese Liste gerne (Info bitte via <a href=\"mailto:team@open-research-berlin.de\">Mail</a> ans OROB).</pre>\n<h2>31. 
M\u00e4rz, Webarchivierung f\u00fcr viele: Expertise und Infrastruktur gemeinschaftlich aufbauen, Berlin</h2>\n<p><em>Jeden Tag geht ein Teil unseres digitalen Kulturerbes unwiederbringlich verloren \u2013 Netzliteratur, Websites, Social-Media-Beitr\u00e4ge und viele weitere Online-Inhalte verschwinden, ohne dass wir es bemerken. Dabei gibt es l\u00e4ngst Wege, dieses Erbe zu bewahren: Gemeinsam mit den Expert:innen Claus-Michael Schlesinger und Mona Ulrich hat die Zentral- und Landesbibliothek Berlin (ZLB) in den letzten zwei Jahren Workshops zu den Tools von Webrecorder veranstaltet, mit denen man Webseiten archivieren kann. Um diese Tools f\u00fcr umf\u00e4ngliche Archivierungsvorhaben zu nutzen, braucht es Ressourcen \u2013 zum Beispiel IT-Ressourcen, die nur sehr wenigen Institutionen zur Verf\u00fcgung stehen. Workshop-Teilnehmer:innen aus kleineren Institutionen und Projekten fragten sich daher immer wieder, wie sie sie langfristig nutzen k\u00f6nnen.</em></p>\n<ul>\n<li><strong>Termin: </strong>31.03.2026, 16:00 bis 18:00 Uhr, Technologiestiftung Berlin, 4. Etage, Grunewaldstr. 61-62, 10825 Berlin</li>\n<li><strong>Organisiert von</strong>: kulturBdigital</li>\n<li>[<a href=\"https://www.kultur-b-digital.de/webarchivierung-fuer-viele-expertise-und-infrastruktur-gemeinschaftlich-aufbauen/\">Information und Anmeldung</a>]</li>\n</ul>\n<h2>13. April, Machine-Learning-Montag I: What the Hype? Eine Einf\u00fchrung in die Grundlagen des maschinellen Lernens f\u00fcr Kulturerbeinstitutionen, online</h2>\n<p><em>Maschinelles Lernen (ML) oder auch \u201eK\u00fcnstliche Intelligenz\u201c (KI) ist weiterhin das gro\u00dfe Thema in fast allen Bereichen des menschlichen Arbeitens. Aber was offerieren diese Werkzeuge abseits des gro\u00dfen Hypes von \u201eschneller, gr\u00f6\u00dfer, besser, einfacher und sch\u00f6ner\u201c und dem damit prognostizierten Durchdringen aller Lebensbereiche?\u00a0Diese digiS-Einf\u00fchrung hat zum Ziel, Nicht-Expert:innen im maschinellen Lernen das n\u00f6tige Hintergrundwissen zu vermitteln, um sich in diesem Diskurs zurechtzufinden und Hype von sinnvoller Anwendung unterscheiden zu k\u00f6nnen.</em></p>\n<ul>\n<li><strong>Termin: </strong>13.04.2026, 10:00 bis 12:30 Uhr</li>\n<li><strong>Organisiert von</strong>: digiS; Referent*innen: Xenia Kitaeva und Marco Klindt (digiS)</li>\n<li>[<a href=\"https://www.digis-berlin.de/machine-learning-montag-am-13-april-what-the-hype/\">Information und Anmeldung</a>]</li>\n</ul>\n<h2>14. April, FDM@BUA: Offboarding Template als Grundlage f\u00fcr Daten- und Wissens\u00fcbergabe in Projekten, online</h2>\n<p><em>Dr. Stefanie Seltmann, Research Data Steward am Berlin Institute of Health, stellt vor, wie sich der Transfer von Forschungsdaten und projektbezogenem Wissen beim Ausscheiden von Projektmitgliedern systematisch gestalten l\u00e4sst.\u00a0Im Mittelpunkt steht ein entwickeltes Offboarding-Template, das als strukturierte Grundlage f\u00fcr Daten- und Wissens\u00fcbergabe dient. Ziel ist es, die Kontinuit\u00e4t in Forschungsprojekten zu sichern, die Qualit\u00e4t der Dokumentation zu verbessern und das Risiko von Datenverlusten zu reduzieren. 
Das Template ist so konzipiert, dass es flexibel an unterschiedliche Forschungskontexte angepasst und in bestehende institutionelle FDM-Prozesse integriert werden kann.</em></p>\n<ul>\n<li><strong>Termin: </strong>14.04.2026, 10:00 bis 11:30 Uhr, online via Webex</li>\n<li><strong>Organisiert von</strong>: Berlin University Alliance</li>\n<li>[<a href=\"https://www.berlin-university-alliance.de/commitments/sharing-resources/shared-resources-center/CARDS-FDM/cards_events/2026-04-14_offboarding.html\">Information und Anmeldung</a>]</li>\n</ul>\n<h2>15. April, Datenmanagementpl\u00e4ne und der RDMO-Service von NFDI4Culture, online</h2>\n<p><em>Sie sind digital k\u00fcnstlerisch oder gestalterisch t\u00e4tig und wollen die bei Ihrer Arbeit anfallenden Daten so managen, dass andere damit arbeiten k\u00f6nnen? Sie sind eine Hochschuleinrichtung, die Daten aus studentischen Arbeiten oder wissenschaftlichen Projekten im Bereich der K\u00fcnste entgegennimmt?\u00a0Der Research Data Management Organiser (RDMO) ist ein flexibles und kostenfreies Werkzeug, das Sie beim Management Ihrer Daten und bei der Planung von digitalen Projekten aller Art unterst\u00fctzen kann.</em></p>\n<ul>\n<li><strong>Termin: </strong>15.04.2026, 15:00 bis 17:00 Uhr, online via Webex</li>\n<li><strong>Organisiert von</strong>: Fokusgruppe OA-K\u00fcnste, open-access.network</li>\n<li>[<a href=\"https://open-access.network/vernetzen/digitale-fokusgruppen/fokusgruppe-oa-kuenste#c28672\">Information</a>]</li>\n</ul>\n<h2>16.-30. April, Open Science Hardware Workshops, TU Berlin</h2>\n<p><em>Open Science Hardware (OSH) enables researchers to design, prototype, document, and share custom research tools in a transparent and reproducible way. It is often facilitated by the use of digital manufacturing, which combines computer-aided design and computer-aided manufacturing software with machines like 3D printers, laser cutters and CNC milling machines.\u00a0In April, several introductory workshops will invite life science researchers and technical staff, including the Neuroscience community, to explore how digital fabrication and structured documentation can strengthen research practice \u2014 from cost-efficient prototyping and publishable hardware to the strengthening of research communities. No prior experience required.</em></p>\n<ul>\n<li><strong>Termin: </strong>16. bis 30.04.2026, Universit\u00e4tsbibliothek der TU Berlin bzw. Campus der Humboldt-Universit\u00e4t zu Berlin</li>\n<li><strong>Organisiert von</strong>: Berlin University Alliance</li>\n<li>[<a href=\"https://events.tu-berlin.de/de/events/019d2fd3-e17f-73fa-be53-5f672d77b504?scopeFilter%5Bpublicly_visible%5D=true&amp;scopeFilter%5Bhidden_in_lists%5D=false&amp;scopeFilter%5Bended%5D=false&amp;page%5Bnumber%5D=1&amp;page%5Bsize%5D=50&amp;page%5Btotal%5D=9&amp;sort%5B0%5D=-pinned&amp;sort%5B1%5D=start_at&amp;sort%5B2%5D=title\">Information und Anmeldung</a>]</li>\n</ul>\n<h2>20. 
April, Workshop Open Access in und f\u00fcr Museen, Europa-Universit\u00e4t Frankfurt/Oder</h2>\n<p><em>Anhand von mehreren Anwendungsf\u00e4llen wollen wir kooperative Ans\u00e4tze f\u00fcr Open Access und Open Culture an der Schnittstelle von Kultureinrichtungen, Hochschulen und Open-Access-Publikationsunterst\u00fctzungsinfrastrukturen explorieren und die Entwicklung eines konzeptionellen Rahmens f\u00fcr m\u00f6gliche L\u00f6sungen vorbereiten.\u00a0Die Veranstaltung richtet sich an in diesen Bereichen t\u00e4tigen Professionals.</em></p>\n<ul>\n<li><strong>Termin: </strong>20.04.2026, Europa-Universit\u00e4t Frankfurt/Oder</li>\n<li><strong>Organisiert von</strong>: Europa-Universit\u00e4t Viadrina, Stiftung Kleist-Museum Frankfurt (Oder) und Vernetzungs- und Kompetenzstelle Open Access Brandenburg (VuK)</li>\n<li>[<a href=\"https://open-access-brandenburg.de/workshop-open-access-in-und-fuer-museen-euv_2026/\">Information und Anmeldung</a>]</li>\n</ul>\n<h2>20. April, Wikidata f\u00fcr die Sammlungserschlie\u00dfung, online</h2>\n<p><em><a href=\"https://www.wikidata.org/wiki/Wikidata:Main_Page\">Wikidata</a> ist ein gro\u00dfer, generischer, offener, frei editierbarer Wissensgraph, der Informationen buchst\u00e4blich \u00fcber Gott (<a href=\"http://www.wikidata.org/entity/Q190\">Q190</a>) und die Welt (<a href=\"http://www.wikidata.org/entity/Q2\">Q2</a>) vorh\u00e4lt \u2013 sowie \u00fcber mehr als 120 Millionen andere Entit\u00e4ten (<a href=\"https://www.wikidata.org/wiki/Wikidata:Statistics\">https://www.wikidata.org/wiki/Wikidata:Statistics</a>). F\u00fcr GLAM-Einrichtungen ist das Potential von Wikidata erheblich: In Wikidata lassen sich Informationen zu Objekten, Personen, Orten, Bauwerken und vielem mehr pflegen, und es k\u00f6nnen bei Bedarf neue Datens\u00e4tze erstellt werden. Wikidata ist somit als flexibler ad-hoc-Normdatengenerator eine optimale Erg\u00e4nzung zur Gemeinsamen Normdatei (GND). [&#8230;]\u00a0\u00dcber all diese Dinge werden wir im digiS-Workshop \u201eWikidata f\u00fcr die Sammlungserschlie\u00dfung\u201c sprechen, um auf diese Weise das Potenzial von Wikidata f\u00fcr GLAM-Institutionen und speziell f\u00fcr die Sammlungsdokumentation genauer in den Blick zu nehmen. Selbstverst\u00e4ndlich wird es Raum f\u00fcr Fragen und Diskussionen geben, eine konkrete Einf\u00fchrung in die praktische Arbeit mit Wikidata und den angesprochenen Tools ist f\u00fcr diese Veranstaltung jedoch nicht vorgesehen.</em></p>\n<ul>\n<li><strong>Termin: </strong>20.04.2026, 10:00 bis 11:30 Uhr, online</li>\n<li><strong>Organisiert von</strong>: digiS; Referent: Alexander Winkler (digiS)</li>\n<li>[<a href=\"https://www.digis-berlin.de/workshop-wikidata-fuer-die-sammlungserschliessung-am-20-04/\">Information und Anmeldung</a>]</li>\n</ul>\n<h2>22.-23. April, FDM@BUA Workshop &#8222;Train-the-Trainer Forschungsdatenmanagement&#8220;, FU Berlin</h2>\n<div class=\"editor-content box-event-doc-abstract\">\n<p><em>Kompetenzen im Umgang mit Forschungsdaten sind eine zentrale Grundvoraussetzung f\u00fcr moderne Wissenschaft: Ohne eine gute Dokumentation und Nachhaltung gibt es keine FAIR (Findable, Accessible, Interoperable, Re-usable) Daten. Um diese Kompetenzen an Forschende in vielen F\u00e4chern und Institutionen der Berlin University Alliance zu vermitteln, braucht es ausgebildete Trainer*innen. 
Das Projekt\u00a0<a href=\"https://www.berlin-university-alliance.de/commitments/sharing-resources/shared-resources-center/CARDS-FDM/index.html\">Collaboratively Advancing Research Data Support (CARDS)</a> bietet daher im April 2026 einen\u00a0<a href=\"https://rti-studio.com/train-the-trainer-workshop-zum-thema-forschungsdatenmanagement/\">Train-the-Trainer Workshop</a>\u00a0zu Forschungsdatenmanagement mit\u00a0<a href=\"https://rti-studio.com/ueber-mich/\">Dr. Katarzyna Biernacka</a>\u00a0an.\u00a0Nach dem zweit\u00e4gigen Workshop werden die Teilnehmenden \u00fcber die notwendigen F\u00e4higkeiten verf\u00fcgen, um eigene Trainings und Beratungen zum Forschungsdatenmanagement in ihrer Einrichtung durchzuf\u00fchren.</em></p>\n</div>\n<ul>\n<li><strong>Termin: </strong>22.-23.04.2026, Rostlaube an der Freien Universit\u00e4t Berlin</li>\n<li><strong>Organisiert von</strong>: Berlin University Alliance; Referentin: Katarzyna Biernacka</li>\n<li>[<a href=\"https://www.fu-berlin.de/sites/forschungsdatenmanagement/veranstaltungen/2026/2026-04-22-23-FDMatBUA-Workshop-T-t-T-en-KB.html\">Information und Anmeldung</a>]</li>\n</ul>\n<h2>23. April, Magnifying Open Science: Insights from the BUA Participatory Research Map, online</h2>\n<p><em>Open Engagement with societal stakeholders is one of the four pillars of the UNESCO Recommendation on Open Science. The Berlin University Alliance Participatory Research Map maps over 90 projects in which researchers collaborate with societal stakeholders. With the Participatory Research Map, we not only want to increase the visibility of participatory research but also explore how different stakeholders and research modes contribute to open science and open knowledge generation.\u00a0In this event, we will present the results of our analysis and discuss with participants how we can collaboratively contribute to magnifying openness in engaging with societal stakeholders.</em></p>\n<ul>\n<li><strong>Termin: </strong>23.04.2026, online</li>\n<li><strong>Organisiert von</strong>: BUA funded project &#8222;Magnifying Open Science&#8220; (Open Research Office Berlin)</li>\n<li>[<a href=\"https://blogs.fu-berlin.de/open-research-berlin/2025/12/18/save-the-date-for-online-event-series-magnifying-open-science/\">Information und Anmeldung</a>]</li>\n</ul>\n<h2>27. April, Machine Learning Montag II: KI und Recht f\u00fcr Kulturerbe-Einrichtungen &#8211; Vortrag und Q&amp;A, online</h2>\n<p><em>F\u00fcr viele Kulturerbe-Einrichtungen stellt sich die Frage, wie der Einsatz von KI in unterschiedlichen Konstellationen rechtlich zu bewerten ist. Da bei der rechtlichen Bewertung noch viele Unsicherheiten bestehen, soll dieser Workshop den aktuellen Stand der Rechtsprechung sowie auch der Gesetzgebung in Hinblick auf KI erl\u00e4utern. Darauf aufbauend wird die Rechtslage bei verschiedenen Anwendungsbereichen in Kulturerbe-Einrichtungen untersucht.</em></p>\n<ul>\n<li><strong>Termin: </strong>27.04.2026, 10:00 bis 12:30 Uhr, online via Zoom</li>\n<li><strong>Organisiert von</strong>: digiS; Referent: Paul Klimpel (iRights.Law)</li>\n<li>[<a href=\"https://www.digis-berlin.de/machine-learning-montag-ii-am-27-april-ki-und-recht-fuer-kulturerbe-einrichtungen-vortrag-und-qa/\">Information und Anmeldung</a>]</li>\n</ul>\n<h2>29. 
April, Workshop Research Data Management in a nutshell, online</h2>\n<p><em>Almost every research project generates or collects digital research data. Researchers face the challenge of not only managing and documenting the data, but also preserving it and making it available for reuse. This online seminar offers a general introduction to essential aspects of research data management.</em></p>\n<ul>\n<li><strong>Termin: </strong>29.04.2026, 09:30 bis 12:00 Uhr, online</li>\n<li><strong>Organisiert von</strong>: Freie Universit\u00e4t Berlin</li>\n<li>[<a href=\"https://www.fu-berlin.de/sites/forschungsdatenmanagement/veranstaltungen/2026/2026-04-29-Workshop-RDM-in-a-nutshell-en-DM.html\">Information und Anmeldung</a>]</li>\n</ul>\n<h2>30. April, #UPDATE BIB: Open Access zu wissenschaftlichen Publikationen &#8211; Aktuelle Herausforderungen f\u00fcr Bibliotheken, online</h2>\n<p><em>Das Seminar bietet eine \u00fcbersichtliche Einf\u00fchrung in den Stand von Open Access an Bibliotheken und stellt die wichtigsten aktuellen Rahmenbedingungen und Entwicklungen vor. Die Teilnehmer*innen lernen die Grundbegriffe von Open Access kennen und verstehen die technischen, rechtlichen und politischen Rahmenbedingungen freier Verf\u00fcgbarkeit von wissenschaftlichen Publikationen. Die Entwicklungen zu Open Access werden im mit Blick auf verschiedene bibliothekarische Handlungsfelder kontextualisiert, wie Erwerbung/Zugang, Informationskompetenz, Forschungsunterst\u00fctzung, technische Infrastrukturen.</em></p>\n<ul>\n<li><strong>Termin: </strong>30.04.2026, 10:00 bis 12:30 Uhr, online</li>\n<li><strong>Organisiert von</strong>: FU Berlin; Referentin: Christina Riesenweber (HU Berlin)</li>\n<li>[<a href=\"https://veranstaltung.weiterbildung.fu-berlin.de/Veranstaltung/cmx64801e98a27ed.html\">Information und Anmeldung</a>]</li>\n</ul>\n<h2>30. April, Open Access meets KI \u2013 L\u00f6sungsans\u00e4tze durch CC-Signals, online</h2>\n<p><em>Um <a href=\"https://creativecommons.org/2025/06/25/introducing-cc-signals-a-new-social-contract-for-the-age-of-ai/\">\u201eoffenes Wissen zu bewahren, [\u2026 und] verantwortungsbewusstes KI-Verhalten [zu] f\u00f6rdern, ohne dabei Innovationen einzuschr\u00e4nken\u201c</a>, hat Creative Commons vor kurzem ein neues Modell vorgestellt: CC Signals. Rechteinhaber*innen sollen so die M\u00f6glichkeit haben, zu signalisieren, unter welchen Voraussetzungen ihre Inhalte von KI-Systemen genutzt werden d\u00fcrfen.\u00a0In unserem n\u00e4chsten ENABLE!-Werkstatt-Gespr\u00e4ch wollen wir uns CC Signals n\u00e4her ansehen und mit unseren Referent*innen diskutieren, wie dieses Modell funktioniert und was wir davon erwarten k\u00f6nnen.</em></p>\n<ul>\n<li><strong>Termin: </strong>30.04.2026, 16:00 bis 17:00 Uhr, online</li>\n<li><strong>Organisiert von</strong>: ENABLE! Community</li>\n<li>[<a href=\"https://enable-oa.org/\">Information</a>]</li>\n</ul>\n<p>weiter zu Mai 2026 [folgt in K\u00fcrze]</p>\n","doi":"https://doi.org/10.59350/s4xat-69z93","funding_references":null,"guid":"https://blogs.fu-berlin.de/open-research-berlin/?p=4021","id":"6a3635b0-a652-448e-addb-627b5bf812d3","image":null,"indexed":true,"indexed_at":1775206819,"language":"de","parent_doi":null,"published_at":1775206767,"reference":[],"registered_at":0,"relationships":[],"rid":"vtt21-qgh66","status":"active","summary":"Unsere monatliche Rubrik zu aktuellen Veranstaltungen rund um Open Research. 
Anmerkung zu dieser Rubrik: Das Open Research Office Berlin erstellt monatlich eine \u00dcbersicht \u00fcber Termine und Veranstaltungen zu Open Access und Open Research in Berlin bzw. an Berliner Einrichtungen. Der Fokus liegt dabei auf unseren Partnereinrichtungen und auf Veranstaltungen, die sich an die \u00d6ffentlichkeit richten bzw.","tags":["Veranstaltungshinweise"],"title":"Veranstaltungshinweise April 2026","updated_at":1775206767,"url":"https://blogs.fu-berlin.de/open-research-berlin/2026/04/03/veranstaltungshinweise-april-2026/","version":"v1"}},{"document":{"abstract":"I am writing this blog with a heavy heart.\u00a0 After 21 years and 2,000 blogs I have taken the decision to \u2018rest\u2019 the website after Easter.\u00a0 My reasons are varied.","archive_url":null,"authors":[{"contributor_roles":[],"family":"Akass","given":"Kim"}],"blog":{"archive_collection":null,"archive_host":null,"archive_prefix":null,"archive_timestamps":null,"authors":null,"canonical_url":null,"category":"mediaAndCommunications","community_id":"d0965544-4413-4b89-aedb-36ae2153c1ac","created_at":1730394736,"current_feed_url":null,"description":"Television Studies Blog","doi_as_guid":false,"favicon":"https://rogue-scholar.org/api/communities/d0965544-4413-4b89-aedb-36ae2153c1ac/logo","feed_format":"application/atom+xml","feed_url":"https://cstonline.net/feed/atom/","filter":null,"funding":null,"generator":"WordPress","generator_raw":"WordPress 6.7.1","home_page_url":"https://cstonline.net/","id":"3e29853c-05ee-479f-aa7d-867ff6dce1e9","indexed":true,"issn":null,"language":"en","license":"https://creativecommons.org/licenses/by/4.0/legalcode","mastodon":null,"prefix":"10.59350","registered_at":0,"relative_url":null,"ror":null,"secure":true,"slug":"cstonline","status":"active","subfield":"3315","subfield_validated":null,"title":"CST Online","updated_at":1775288937.968264,"use_api":true,"use_mastodon":false,"user_id":"80307be4-0a5d-4378-a38f-91852e38c1d8"},"blog_name":"CST Online","blog_slug":"cstonline","content_html":"<p style=\"font-weight: 400;\">I am writing this blog with a heavy heart.\u00a0 After 21 years and 2,000 blogs I have taken the decision to \u2018rest\u2019 the website after Easter.\u00a0 My reasons are varied.\u00a0 Since we started this iteration of CSTonline, with my gripe about <a href=\"https://cstonline.net/sky-exclusivity-weve-been-here-before-by-kim-akass/\">Sky Exclusivity </a>and John Ellis\u2019s <a href=\"https://cstonline.net/letter-from-america-by-john-ellis-3/\">letter from America</a>, we have had a steady stream of blogs.\u00a0\u00a0 Some weeks we were inundated and other weeks not so, but we have always received something from someone.</p>\n<p style=\"font-weight: 400;\">The idea of the website was to provide a public, open access forum, for the dissemination of writing about TV, reports from funded projects and just general \u2018this is what I saw this week\u2019.\u00a0 We always said that TV demanded instant responses, we couldn\u2019t always wait for publishers to print our thoughts \u2013 the promise of the internet meant that we could receive a blog and have it out there for reading within a week.\u00a0 Heady days.</p>\n<p style=\"font-weight: 400;\">The problem is that, over the past few years, Higher Education has been undergoing some pretty seismic changes.\u00a0 Redundancies (voluntary or otherwise), lack of funding, heavier workloads for remaining staff and increased demands from students have meant that everyone has less and less time to devote to writing that 
doesn\u2019t bring some kind of institutional reward.\u00a0 It makes sense that, in this case, families to attend to, books to write and students to teach, coupled with the demands of REF (or the tenure track) and a general sense of overwhelm, have resulted in no blogs.</p>\n<p style=\"font-weight: 400;\">Thanks to stalwart bloggers, and a team of committed volunteers, we have managed to keep the website alive, but it has become clear that something has to change.\u00a0 Podcasts are the new (old) blogs and, despite our attempts to keep everyone interested, it is time to admit that we can no longer proceed without regular content.</p>\n<p style=\"font-weight: 400;\">We <a href=\"https://cstonline.net/cst-online-relaunch-by-kim-akass/\">re-launched CSTonline</a> in its present state on 19 February 2011.\u00a0 Early days were exciting and busy.\u00a0 My re-launch blog announced that \u2018We are retaining David Lavery\u2019s column <em>Telegenic</em>, with his insightful and humorous look at all things televisual.\u00a0\u00a0<em>In Primetime</em>\u00a0stays and so do the regularly updated sections \u2013 Calls For Papers, upcoming conferences, workshops and study days (listed monthly), postgraduate funding, the (very) occasional job vacancy and my favourite TV story of the week (or sometimes day) complete with moving pictures.\u2019</p>\n<p style=\"font-weight: 400;\">Even someone as prolific as David Lavery, however, found it difficult to keep up with blogging demands and called \u2018Telegenic\u2019 quits after his blog on <em><a href=\"https://cstonline.net/the-state-of-the-american-sitcom-v-modern-family-by-david-lavery/\">Modern Family</a></em>.\u00a0 He <a href=\"https://cstonline.net/?s=Lavery\">continued to blog for us</a> until he sadly died on 30 August 2016.\u00a0 <a href=\"https://cstonline.net/?s=Pixley\">Andrew Pixley</a> has been one of our more prolific bloggers, as has <a href=\"https://cstonline.net/?s=Beattie\">Melissa Beattie</a>.\u00a0 I have <a href=\"https://cstonline.net/?s=Akass\">written a few over the years</a>, as has the aforementioned <a href=\"https://cstonline.net/?s=Ellis\">John Ellis</a>.\u00a0 <a href=\"https://cstonline.net/?s=Weissmann\">Elke Weissmann</a> has been prolific as well as editing and managing ECREA\u2019s contributions (for which I am grateful). \u00a0We have featured blogs from all over the world about subjects relevant to TV, from Public Service Broadcasting to commercial dramas, streaming, cable, networks, social media \u2026 the list goes on.</p>\n<p style=\"font-weight: 400;\">I am sure that the community has much more to say about the state of television.\u00a0 Streaming has up-ended the industry, as have the introduction of AI, the writers\u2019 strikes and the continued (and continual) attack on the BBC. 
There is always something to say but, unfortunately, not always the time to say it.</p>\n<p style=\"font-weight: 400;\">I continue to be passionate about TV, I love watching, reading about and writing about television.\u00a0 I am sure there are people out there that want to blog, and we will always publish if someone wants to submit something.\u00a0 However, I reluctantly admit that, if I can\u2019t find the time to write a blog, why should I expect others to?</p>\n<p style=\"font-weight: 400;\">I am so very grateful for the amazing support I have had over the years.\u00a0 Debra Ramsay, Lisa Kelly, Sarah Lahm and Ben Keightly have served faithfully (if I have forgotten someone I apologise).\u00a0 I have received institutional support from Royal Holloway and the University of Hertfordshire.\u00a0 The editorial board at <em>Critical Studies in Television</em> have been amazing.\u00a0 This website would never have got off the ground without mediacitizens who freely gave of designers and web hosting.\u00a0 My most grateful thanks go to Tobias Steiner who continues to work hard on the back end of the website.\u00a0 All of this time and hard work has been freely and generously given.</p>\n<p style=\"font-weight: 400;\">The website will remain online \u2013 there is a wealth of television history contained in its massive archive and I do hope you will continue to read and engage with it.</p>\n<p style=\"font-weight: 400;\">But, until the next iteration of the website, we are reluctantly calling time on this endeavour.</p>\n<div style=\"width: 480px;\" class=\"wp-video\"><video class=\"wp-video-shortcode\" id=\"video-15775-1\" width=\"480\" height=\"360\" preload=\"metadata\" controls=\"controls\"><source type=\"video/mp4\" src=\"https://cstonline.net/wp-content/uploads/2026/04/YTDown.com_YouTube_Bugs-Bunny-That-s-All-Folks_Media_HeERupuicHE_001_360p.mp4?_=1\" /><a href=\"https://cstonline.net/wp-content/uploads/2026/04/YTDown.com_YouTube_Bugs-Bunny-That-s-All-Folks_Media_HeERupuicHE_001_360p.mp4\">https://cstonline.net/wp-content/uploads/2026/04/YTDown.com_YouTube_Bugs-Bunny-That-s-All-Folks_Media_HeERupuicHE_001_360p.mp4</a></video></div>\n","doi":"https://doi.org/10.59350/149p8-3jh82","funding_references":null,"guid":"https://cstonline.net/?p=15775","id":"37b623ec-0fd6-45c1-b384-536b7142f175","image":"https://cstonline.net/wp-content/uploads/2026/04/Past-Future-image-2021-1024x421-1.jpg","indexed":true,"indexed_at":1775205403,"language":"en","parent_doi":null,"published_at":1775203941,"reference":[],"registered_at":0,"relationships":[],"rid":"c3h28-yep51","status":"active","summary":"I am writing this blog with a heavy heart.\u00a0 After 21 years and 2,000 blogs I have taken the decision to \u2018rest\u2019 the website after Easter.\u00a0 My reasons are varied.\u00a0 Since we started this iteration of CSTonline, with my gripe about Sky Exclusivity and John Ellis\u2019s letter from America, we have had a steady stream of blogs.","tags":["Blogs"],"title":"CSTonline by Kim Akass","updated_at":1775204127,"url":"https://cstonline.net/cstonline-by-kim-akass/","version":"v1"}},{"document":{"abstract":"2 days with up to 100+ papers in 30+ panels, 4 keynote events, lunches and refreshment breaks for both days, optional self-funded conference meal, student rates (and lottery free spaces) and campus accommodation available \u2013 Talbot Campus \u2013 Bournemouth University DEADLINE FOR SUBMISSION 3 May 2026 The Centre for the Study of Conflict, Emotion and 
[\u2026]","archive_url":null,"authors":[{"contributor_roles":[],"family":"Akass","given":"Kim"}],"blog":{"archive_collection":null,"archive_host":null,"archive_prefix":null,"archive_timestamps":null,"authors":null,"canonical_url":null,"category":"mediaAndCommunications","community_id":"d0965544-4413-4b89-aedb-36ae2153c1ac","created_at":1730394736,"current_feed_url":null,"description":"Television Studies Blog","doi_as_guid":false,"favicon":"https://rogue-scholar.org/api/communities/d0965544-4413-4b89-aedb-36ae2153c1ac/logo","feed_format":"application/atom+xml","feed_url":"https://cstonline.net/feed/atom/","filter":null,"funding":null,"generator":"WordPress","generator_raw":"WordPress 6.7.1","home_page_url":"https://cstonline.net/","id":"3e29853c-05ee-479f-aa7d-867ff6dce1e9","indexed":true,"issn":null,"language":"en","license":"https://creativecommons.org/licenses/by/4.0/legalcode","mastodon":null,"prefix":"10.59350","registered_at":0,"relative_url":null,"ror":null,"secure":true,"slug":"cstonline","status":"active","subfield":"3315","subfield_validated":null,"title":"CST Online","updated_at":1775288937.968264,"use_api":true,"use_mastodon":false,"user_id":"80307be4-0a5d-4378-a38f-91852e38c1d8"},"blog_name":"CST Online","blog_slug":"cstonline","content_html":"<div><b>2 days with up to 100+ papers in 30+ panels, 4 keynote events, lunches and refreshment </b><strong>breaks for both days, optional self-funded conference meal, student rates (and lottery free spaces) and campus accommodation available \u2013 </strong><a href=\"https://www.bournemouth.ac.uk/why-bu/facilities-campuses/talbot-campus\"><strong>Talbot Campus \u2013 Bournemouth University</strong></a></div>\n<p style=\"font-weight: 400;\"><strong>DEADLINE FOR SUBMISSION 3 May 2026</strong></p>\n<p style=\"font-weight: 400;\"><a href=\"https://www.bournemouth.ac.uk/research/centres-institutes/centre-study-conflict-emotion-social-justice\">The Centre for the Study of Conflict, Emotion and Social Justice</a>, in the Faculty of Media, Science and Technology at Bournemouth University invites scholarly and practice-based proposals for an in-person conference on media and emotion.</p>\n<p style=\"font-weight: 400;\">As neuroscientist Raymond J. Dolan observes, \u201cemotion provides the principal currency in human relationships as well as the motivational force for what is best and worst in human behaviour\u201d (2002). Within contemporary media production and consumption, emotion often binds us together, at times appearing as a language of intimacy, vulnerability and reflexivity, and at times appearing as a language of division, entitlement and exclusion. Therefore, emotions expressed and evoked through media have attracted sustained scholarly attention across a wide range of disciplines, spanning the humanities, the social sciences, and the natural sciences.</p>\n<p style=\"font-weight: 400;\">Notably, in the era of populism, political leaders deploy emotionally charged narratives, in offering simple answers to complex problems, often with minority groups as the targets of division and abjection.\u00a0Also, techniques of production and representation deploy the language of emotion, in aesthetic and narrative-oriented contexts, and theoretical work is constantly evolving.</p>\n<p style=\"font-weight: 400;\">As Laura U. Marks discussed in her landmark text <em>The Skin of Film</em> (1999), contemporary media offers a creative space for issues of touch, memory and hegemonic challenge, invigorated through a media-based emotional landscape. 
At the same time, Sara Ahmed has theorised in <em>The Cultural Politics of Emotion</em> (2014) that \u2018affective economies\u2019 and \u2018sticky associations\u2019 shape our phenomenological landscapes, defining boundaries for minority voices as much as offering spaces for resistance and reinvention.</p>\n<p style=\"font-weight: 400;\">We invite scholars from any related disciplines and industry practitioners to participate in this conference and share critical perspectives on media and emotion, drawing on their theoretical models, research trajectories or practice-based environments. Our keynote speakers, Kristyn Gorton, Kim Akass and Lisa Blackman, and our Industry keynote panel led by Christa van Raalte (see below), will offer insights into media affects and their intersection with scholarly and practice-based approaches.</p>\n<p style=\"font-weight: 400;\"><strong>AREAS OF INQUIRY (not exhaustive)</strong></p>\n<table style=\"font-weight: 400;\" width=\"662\">\n<tbody>\n<tr>\n<td width=\"662\">\u25cf\u00a0\u00a0\u00a0\u00a0\u00a0 <strong>Emotional states</strong>, such as anger, anomie, confusion, compulsion, contempt, disgust, dissociation, fear, happiness, indifference, joy, longing, nihilism, rage, regret, shame, surprise.</td>\n</tr>\n<tr>\n<td width=\"662\">\u25cf\u00a0\u00a0\u00a0\u00a0\u00a0 <strong>Practice-oriented contexts</strong>, such as broadcasting, cinematography, directing, distribution, drama, documentary, editing, journalism, liveness, marketing, streaming, social media, touchscreen technology, workplace.</td>\n</tr>\n<tr>\n<td width=\"662\">\u25cf\u00a0\u00a0\u00a0\u00a0\u00a0 <strong>Political and social worlds</strong>, such as Brexit, Covid-19, citizenship, community, Gaza, disability, ethnicity, inclusivity, nationality, neoliberalism, race, religion, Sudan, Thatcherism, Trump, Ukraine.</td>\n</tr>\n<tr>\n<td width=\"662\">\u25cf\u00a0\u00a0\u00a0\u00a0\u00a0 <strong>Theoretical models</strong>, relating to concepts, such as affect, alienation, behaviour, cognition, community, colonialism, consumption, embodiment, gender, genre, identity, inclusivity, memory, minority, nostalgia, orientalism, otherness, pastiche, post-colonialism, phenomenology, reasoning, regulation, representation, sexuality, surrealism, social realism, trauma.</td>\n</tr>\n</tbody>\n</table>\n<p style=\"font-weight: 400;\"><strong>SUBMIT YOUR PROPOSALS:</strong></p>\n<p style=\"font-weight: 400;\">Please submit abstract proposals of 250 words (max) by 3 May 2026, using the appropriate links below (as single paper or pre-formed panel):</p>\n<p style=\"font-weight: 400;\"><a href=\"https://forms.office.com/Pages/ResponsePage.aspx?id=VZbi7ZfQ5EK7tfONQn-_uKTV25ijuANLi5dE2tVQ245UQTlTMVo3WjIxOU44MzVRQldYV0hYNUdXTS4u\">Media and Emotion Conference September 2026: SINGLE PAPER PROPOSAL \u2013 Fill out form</a></p>\n<p style=\"font-weight: 400;\"><a href=\"https://forms.office.com/Pages/ResponsePage.aspx?id=VZbi7ZfQ5EK7tfONQn-_uKTV25ijuANLi5dE2tVQ245UQjBBMzcxWFVDUDRJMzhaU1dLTVFRWDRXSy4u\">Media and Emotion Conference September 2026: PRE-FORMED PANEL PROPOSAL \u2013 Fill out form</a></p>\n<p style=\"font-weight: 400;\">Decisions will be announced after 15<sup>th</sup> May 2026.</p>\n<p style=\"font-weight: 400;\"><strong>NB:</strong> This conference is an in-person event only, with no facility for hybrid presentations.</p>\n<p style=\"font-weight: 400;\"><strong>STUDENTS:</strong></p>\n<p style=\"font-weight: 400;\">We will also offer <strong>postgraduate researchers</strong> the opportunity to 
enter a lottery to win a <strong>registration fee waiver</strong> (with five spaces available).</p>\n<p style=\"font-weight: 400;\"><strong>REGISTRATION &amp; ACCOMMODATION</strong></p>\n<p style=\"font-weight: 400;\"><strong>Registration fee: </strong>including refreshments and lunch for two days:</p>\n<p style=\"font-weight: 400;\">\u00a3140 (students, part-time employment)</p>\n<p style=\"font-weight: 400;\">\u00a3170 (full-time employment)</p>\n<p style=\"font-weight: 400;\"><strong>Conference evening</strong> meal will be available under a separate invitation, at own cost.</p>\n<p style=\"font-weight: 400;\"><strong>On-site campus accommodation </strong>will be available at \u00a375 for three nights (fixed price), plus \u00a325 for each additional night (over the preceding weekend).</p>\n<p style=\"font-weight: 400;\"><strong>Local hotels available</strong> at reduced conference rates.</p>\n<p style=\"font-weight: 400;\"><strong>CONFIRMED KEYNOTES:</strong></p>\n<p style=\"font-weight: 400;\"><a href=\"https://www.gold.ac.uk/media-communications/staff/blackman/\"><strong>Lisa Blackman </strong>(Professor in Media and Communications \u2013 Goldsmiths University)</a> &#8211; whose work includes:</p>\n<ul>\n<li><em>Grey Media: A Psychopolitics of Deception</em> (Punctum Books 2026).</li>\n<li><em>Haunted Data: Affect, Transmedia, Weird Science</em> (Bloomsbury 2019).</li>\n</ul>\n<p style=\"font-weight: 400;\"><strong>DECEIT AND DECEPTION:</strong> Lisa will explore media and emotion through the concept of \u2018grey media\u2019, a term which brings into alignment the long histories of apparatuses of deceit and deception which have a distinct mediality, linking the gaslighting of emotional abuse, information warfare and AI Deception.</p>\n<p style=\"font-weight: 400;\"><a href=\"https://ahc.leeds.ac.uk/arts-humanities-cultures/staff/2910/professor-kristyn-gorton\"><strong>Kristyn Gorton (Professor of Film and Television \u2013 University of Leeds)</strong></a> \u2013 whose work includes:</p>\n<ul>\n<li><em>Emotion Online: Theorising Affect on the Internet</em> (Palgrave 2013).</li>\n<li><em>Media Audiences: Television, Meaning and Emotion</em> (Edinburgh University Press, 2009).</li>\n</ul>\n<p style=\"font-weight: 400;\"><strong>EMPATHY AND INTIMACY:</strong>\u00a0 This paper returns to Kristyn\u2019s earlier work (as above) and engages with recent work on &#8216;empathy&#8217; and &#8216;intimacy&#8217; to reflect on the development of the field and the ways in which television constructs emotion. Kristyn will draw on examples from serial melodrama which use excess to mark out spaces for viewers to work through narratives of social justice and change. The paper will also consider how the production cultures impact and inform the affective landscape of these stories.</p>\n<p style=\"font-weight: 400;\"><strong>Kim Akass</strong> (Professor of Radio, Television and Film) &#8211; whose work includes:</p>\n<ul>\n<li><em>Mothers on American Television: From Here to Maternity</em> (Manchester University Press 2023).</li>\n</ul>\n<p style=\"font-weight: 400;\"><strong>RAGE AND MOTHERHOOD</strong>: Since the overturn of Roe v. Wade in June 2022 and the resulting ban on abortion in 13 states (so far), is it surprising that we are seeing so much female rage on our screens? From postpartum psychosis in <em>Die My Love</em> (Lynne Ramsay, 2025) to <em>If I Had Legs I\u2019d Kick You</em> (Mary Bronstein, 2025), maternal rage is, well, all the rage. 
In this paper Kim will explore how female rage has emerged as a theme in film and TV and asks whether this is due to an increase in women behind the scenes or a reaction to punitive legislation against women\u2019s reproductive rights.</p>\n<p style=\"font-weight: 400;\"><a href=\"https://staffprofiles.bournemouth.ac.uk/display/cvanraalte\"><strong>Christa van Raalte</strong> (Associate Professor of Film and Television \u2013 Bournemouth University)</a> \u2013 whose work includes:</p>\n<ul>\n<li>The Good Manager in TV: Tales for the Twenty-first Century, in <em>Creative Industries Journal </em>(2024), (with Wallis, R.).</li>\n<li>More Than Just a Few \u2018Bad Apples\u2019: The Need for a Risk Management Approach to the Problem of Workplace Bullying in the UK\u2019s Television Industry, in <em>Creative Industries Journal </em>(2023), (with Wallis, R. and Pekalski, D.).</li>\n</ul>\n<p style=\"font-weight: 400;\"><strong>TV INDUSTRY PANEL: THE ECONOMICS OF EMOTION</strong>:\u00a0 Christa will also bring together a range of industry practitioners, considering how emotion works as a commodity for creativity, in artistic and workplace contexts. What are the safeguarding standards when creators, collaborators and audiences engage with productions that frame emotional media? How might media producers negotiate the polarising emotional landscape and ethical broadcasting standards when creating content?</p>\n<p style=\"font-weight: 400;\"><strong>We are looking forward to your submissions!!</strong></p>\n<p style=\"font-weight: 400;\"><strong>Conference organisers:</strong> Christopher Pullen, Catalin Brylla &amp; Savvas Voutyras of</p>\n<p style=\"font-weight: 400;\"><a href=\"https://www.bournemouth.ac.uk/research/centres-institutes/centre-study-conflict-emotion-social-justice\">The Centre for the Study of Conflict, Emotion and Social Justice</a></p>\n<p style=\"font-weight: 400;\">Bournemouth University, Faculty of Media, Science and Technology, Talbot Campus, Fern Barrow Poole, BH12 5BB.</p>\n<p style=\"font-weight: 400;\"><strong>Conference email contact: </strong><a href=\"mailto:cpullen@bournemouth.ac.uk\">cpullen@bournemouth.ac.uk</a></p>\n","doi":"https://doi.org/10.59350/zmmp8-n8w87","funding_references":null,"guid":"https://cstonline.net/?p=15784","id":"9895a0b3-b02a-44f4-b87b-fa8655fb8712","image":"https://cstonline.net/wp-content/uploads/2026/04/1773843427481.jpeg","indexed":true,"indexed_at":1775205402,"language":"en","parent_doi":null,"published_at":1775203256,"reference":[],"registered_at":0,"relationships":[],"rid":"64rbw-1zn97","status":"active","summary":"<b>\n 2 days with up to 100+ papers in 30+ panels, 4 keynote events, lunches and refreshment\n</b>\n<strong>\n breaks for both days, optional self-funded conference meal, student rates (and lottery free spaces) and campus accommodation available \u2013\n</strong>\n<strong>\n Talbot Campus \u2013 Bournemouth University\n</strong>\n<strong>\n DEADLINE FOR SUBMISSION 3 May 2026\n</strong>\nThe Centre for the Study of Conflict, Emotion and Social Justice, in the Faculty of Media,","tags":["CFPs","CFPs Conferences"],"title":"CFP: MEDIA AND EMOTION CONFERENCE \u2013 7-8 SEPTEMBER 2026","updated_at":1775203966,"url":"https://cstonline.net/cfp-media-and-emotion-conference-7-8-september-2026/","version":"v1"}},{"document":{"abstract":"In my Day 1 article, I wrote that the OECD Digital Education Outlook 2026 conference documented performance gains alongside learning losses, efficiency alongside declining human competence, and the 
emergence of what Dragan Gasevic called \u201cmetacognitive laziness.\u201d I described a day that did not offer comfort.","archive_url":null,"authors":[{"affiliation":[{"id":"https://ror.org/04h13ss13","name":"The Geneva Learning Foundation"}],"contributor_roles":[],"family":"Sadki","given":"Reda","url":"https://orcid.org/0000-0003-4051-0606"}],"blog":{"archive_collection":null,"archive_host":null,"archive_prefix":null,"archive_timestamps":null,"authors":null,"canonical_url":null,"category":"educationalSciences","community_id":"7e26491f-41c6-4665-9088-5aa6643a1ba8","created_at":1731211871,"current_feed_url":null,"description":"Learning to make a difference","doi_as_guid":false,"favicon":null,"feed_format":"application/atom+xml","feed_url":"https://redasadki.me/feed/atom/","filter":null,"funding":null,"generator":"WordPress","generator_raw":"WordPress 6.7.1","home_page_url":"https://redasadki.me","id":"88b8caba-b485-4654-96ce-a21547abaab3","indexed":true,"issn":null,"language":"en","license":"https://creativecommons.org/licenses/by/4.0/legalcode","mastodon":"https://techhub.social/@redasadki","prefix":"10.59350","registered_at":0,"relative_url":null,"ror":null,"secure":true,"slug":"redasadki","status":"active","subfield":"3304","subfield_validated":null,"title":"Reda Sadki","updated_at":1775289073.567933,"use_api":true,"use_mastodon":false,"user_id":"0d34dfde-a007-4ec9-9bc6-7b0318fa2c5e"},"blog_name":"Reda Sadki","blog_slug":"redasadki","content_html":"<p 
id=\"h-in-my-day-1-article-i-wrote-that-the-oecd-digital-education-outlook-2026-conference-documented-performance-gains-alongside-learning-losses-efficiency-alongside-declining-human-competence-and-the-emergence-of-what-dragan-gasevic-called-metacognitive-laziness-i-described-a-day-that-did-not-offer-comfort\">In my <a href=\"https://doi.org/10.59350/1bqm0-1d126\">Day 1 article</a>, I wrote that the OECD Digital Education Outlook 2026 conference documented performance gains alongside learning losses, efficiency alongside declining human competence, and the emergence of what Dragan Gasevic called \u201cmetacognitive laziness.\u201d I described a day that did not offer comfort.</p>\n\n\n\n<p>Where the first day established the tension between performance and learning, the second day forced the question of what to do about it. Nine sessions brought practitioners, researchers, young people, AI companies, and policymakers face to face with the growing evidence that generative AI in education is producing a widening gap between what students can do with AI and what they understand without it. The most striking contribution came not from a professor or a minister but from Beatriz Moutinho, a young woman from Cabo Verde, who said: \u201cI am very worried about AI replacing young people in the job market. But I am even more worried about young people preemptively replacing themselves.\u201d</p>\n\n\n\n<p>That sentence reframed the entire day: what happens when people become indistinguishable from the AI itself?</p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-self-replacement-risk-young-people-see-what-adults-are-slow-to-name\">Self-replacement risk: Young people see what adults are slow to name</h2>\n\n\n\n<p>Beatriz Moutinho, moderating and speaking in the youth session, articulated risks that the research sessions had danced around. She described an escalation pattern: students begin by using AI for discrete tasks, progress to using it for structuring their thinking, and eventually use it to form opinions and make personal decisions. \u201cWe are giving our first drafts of our first thoughts in our brain directly to AI before even fully structuring them,\u201d she said.</p>\n\n\n\n<p>Her concept of \u201cself-replacement\u201d was the most original intellectual contribution of the day. It is not that AI will take young people\u2019s jobs. It is that young people will preemptively delegate the formation of their own professional voice to AI, producing homogenised output that makes them indistinguishable from the machine. \u201cThis loss of differentiation might be something to look out for,\u201d Moutinho said, \u201cespecially in the job market.\u201d</p>\n\n\n\n<p>She also identified what she called a \u201cflipped AI divide\u201d: wealthier students retain access to human support while lower-income students become increasingly reliant on AI alone. This inverts the optimistic narrative of AI as an equaliser.</p>\n\n\n\n<p>Elisa Lorenzini, a student from Italy, and Kenji Inoue, a student from Japan, both reported that their schools had provided no formal AI literacy instruction. Lorenzini said her teachers prohibited AI because they did not understand it. 
\u201cIt would be useful if teachers knew how to use it,\u201d she said, \u201cbecause maybe they can understand why it is a useful tool even for students.\u201d</p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-the-performance-learning-gap-deepens\">The performance-learning gap deepens</h2>\n\n\n\n<p>The central finding of the OECD Digital Education Outlook 2026, presented as a keynote by lead editor Stephan Vincent-Lancrin, is blunt. General-purpose generative AI tools reliably improve short-term task performance but do not reliably produce learning gains. The mechanism is metacognitive laziness: when AI produces fluent, confident output, learners stop monitoring their own thinking.</p>\n\n\n\n<p>Vincent-Lancrin reported that high school and vocational students in several countries approach 80 percent usage rates for generative AI. He described a study in which students using ChatGPT for homework scored zero additional points on a subsequent knowledge test. \u201cOur traditional education model assumes that if we perform better, then that means we have the knowledge and skills,\u201d he said. \u201cWhich is very problematic.\u201d</p>\n\n\n\n<p>Dragan Gasevic, presenting in the assessment session, provided the sharpest experimental evidence. A randomised controlled trial lasting nearly a full semester with medical students showed that those given immediate AI access performed no better than the AI working alone. Only students who developed their clinical reasoning skills before AI was introduced achieved genuine human-AI synergy. \u201cHybrid intelligence is not that you just automate a task to AI,\u201d Gasevic said. \u201cIf your ability is completely automated, that means you are obsolete as well yourself.\u201d</p>\n\n\n\n<p>Inge Molenaar of Radboud University explained the mechanism. The fluency of AI output suppresses the metacognitive cues that normally trigger critical evaluation. \u201cThe metacognitive cues that generative AI responses give to humans do not allow us to engage or do not trigger us to engage in critical evaluation and in learning activities,\u201d she said. \u201cIt increases the chance of accepting it and moving backwards.\u201d </p>\n\n\n\n<p>The zone of proximal development collapses: AI output is often beyond what a student can process, and instead of scaffolding learning, it replaces it.</p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-practitioners-redesign-everything-from-scratch\">Practitioners redesign everything from scratch</h2>\n\n\n\n<p>If <a href=\"https://redasadki.me/2026/03/24/oecd-digital-education-outlook-2026-how-can-ai-help-human-beings-learn-and-grow/\" type=\"post\" id=\"23232\">Day 1 established the theory</a>, Day 2 showed the practice. The opening session brought teachers from Iceland, England, and India who are living with AI in their classrooms every day.</p>\n\n\n\n<p>Frida Gylfadottir and Tinna Osp Arnardottir, from a secondary school in Gardabae, Iceland, described a national pilot involving 255 teachers across 31 schools. They have redesigned assessment so that written essays count for only 20 percent of the grade, with oral draft interviews and oral defences making up the rest. \u201cIf they have not written the essay, if the text is written by AI, it is really difficult for them to point out where the thesis statement is located or the topic sentences,\u201d Gylfadottir said. \u201cThey cannot fake it.\u201d</p>\n\n\n\n<p>Christian Turton of the Chiltern Learning Trust in England was equally direct. 
\u201cEvery assignment and every test, every task we used to rely on has to be rethrown from scratch,\u201d he said. Turton introduced the concept of \u201cdigital metacognition,\u201d thinking about where the thinking happens when using AI. He also reported that his trust trialled AI marking tools and found the error rate unacceptable.</p>\n\n\n\n<p>Souptik Pal of the Learning Links Foundation in India described classrooms of 100 students where differentiation without AI is nearly impossible. After two-day teacher training sessions, the majority of trained teachers began using AI for daily lesson planning. But Pal emphasised that the biggest barrier is not technical. It is attitudinal. \u201cThe most important challenge is coming with the mindset that AI will replace the teachers,\u201d he said.</p>\n\n\n\n<p>Gylfadottir captured a practitioner reality in one sentence: \u201cThe truth is right now we are spending more time, not less.\u201d</p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-bricolage-assessment-must-change-but-the-evidence-base-is-dangerously-thin\">Bricolage: assessment must change, but the evidence base is dangerously thin</h2>\n\n\n\n<p>Ryan Baker proposed \u201cinvigilation on an audit basis\u201d as one way forward. Let students use AI to produce artefacts, but periodically ask them to explain their work without the technology present. \u201cIf they cannot talk about it, then they do not really understand it,\u201d he said. Nikol Rummel described a collaborative approach in which students using different AI prompts must reconcile divergent outputs, creating what she called the \u201cIKEA effect,\u201d ownership through effortful engagement, a form of <em>bricolage</em>.</p>\n\n\n\n<p>Gasevic pushed further, arguing for two parallel assessment streams: one measuring standalone human skills, and another measuring human-AI synergy. He reported that LLM-based analysis of process data, including chat logs and keystroke patterns, already achieves approximately 80 percent of expert-quality results, making scalable process assessment technically feasible.</p>\n\n\n\n<p>But behind these proposals sits an uncomfortable truth that Isabelle Hau of the Stanford Accelerator for Learning made explicit in the safety session. Her systematic review found only 22 causal-quality studies on AI and learning. No longitudinal data exist. \u201cWe are currently running a massive uncontrolled experiment on our children,\u201d said Stephie Herlin of KORA, \u201cand you cannot improve what you do not measure.\u201d KORA has benchmarked more than 30 AI models. Closed-source models average 49 percent on child safety scores. Open-source models average 25 percent. Seven models score zero.</p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-ai-literacy-as-everyone-s-responsibility-means-it-is-nobody-s-responsibility\">AI literacy as everyone\u2019s responsibility means it is nobody\u2019s responsibility</h2>\n\n\n\n<p>The AI literacy session, moderated by Laura Lindberg of European Schoolnet, revealed a paradox that Daniela Hau of Luxembourg\u2019s Ministry of Education stated plainly: \u201cIf we say everybody, we risk saying nobody.\u201d</p>\n\n\n\n<p>The <a href=\"https://ailiteracyframework.org\">EC-OECD AI Literacy Framework</a> defines 22 competences across four domains. Mario Piacentini of the OECD described how this framework will be translated into a PISA 2029 assessment. 
Simona Petkova of the European Commission reported that young people in Europe are twice as likely to use generative AI as the general population, yet three out of four teachers do not feel well prepared to address AI in the classroom. Teachers are estimated to be more exposed to AI than 90 percent of workers across the EU.</p>\n\n\n\n<p>The most significant empirical contribution came from Lixiang Yan of Tsinghua University, who presented a national study of nearly 2.4 million Chinese vocational students. Yan found that institutional AI readiness only improves student AI literacy when it runs through teachers who have developed genuine instructional competence with AI. \u201cThe teacher is the indispensable engine in this transformation,\u201d Yan said. General attitudinal acceptance is not enough. The system must build collective instructional capability.</p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-ai-in-research-is-already-everywhere-and-the-risks-mirror-education\">AI in research is already everywhere, and the risks mirror education</h2>\n\n\n\n<p>Dominique Guellec of the University of Strasbourg documented the penetration of AI in scientific research: from 2 percent of publications in 2015 to 8 percent in 2022, and approaching two-thirds of all researchers using AI by 2025. He described AI as no longer a tool but part of the infrastructure of doing research. \u201cThere is a risk on the human side to over-rely on AI, especially when it does the writing for you,\u201d Guellec said. \u201cWriting is also a part of thinking.\u201d</p>\n\n\n\n<p>In a moment that captured the pace of change more vividly than any statistic, Guellec acknowledged on stage that sections of his own OECD Digital Education Outlook 2026 chapter were already outdated. \u201cWhat I put in the slide, which is that AI does not yet do research-level mathematics, is already outdated,\u201d he said.</p>\n\n\n\n<p>Yuko Harayama of the Global Partnership on AI argued that the researcher\u2019s identity needs to shift from generating solutions to evaluating them. \u201cWhat you have to re-explore and re-empower will be the out-of-the-box thinking,\u201d she said, \u201cnot just following and becoming dependent on the output coming from AI.\u201d A <a href=\"https://www.science.org/doi/10.1126/science.adw3000\">study published in Science Magazine</a>, cited in the session, found homogenisation of research topics in the fields most intensive in AI use.</p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-the-equity-question-is-structural-not-peripheral\">The equity question is structural, not peripheral</h2>\n\n\n\n<p>The session on educational GenAI in low and middle-income areas, moderated by Cristobal Cobo of the World Bank, confronted a question that Day 1 raised but did not resolve: will AI close or widen the educational divide?</p>\n\n\n\n<p>Paul Atherton laid out the infrastructure gap. Children in low-income countries are up to 14 times less likely to have internet at home. But Atherton argued that the more fundamental barrier is literacy itself. \u201cIf you cannot read, you cannot access a language model that is done through reading,\u201d he said. The Matthew effect applies: those with the most capability to use AI gain the most.</p>\n\n\n\n<p>Seiji Isotani of the University of Pennsylvania presented the most compelling positive evidence. 
His <a href=\"https://doi.org/10.1007/978-3-031-36336-8_118\">AIED Unplugged system</a> reached more than 500,000 students across 20,000 schools in Brazil using only teacher mobile phones and printed feedback sheets. No student devices or internet were required. \u201cInstead of putting the burden on governments, we put the burden on people who develop technologies,\u201d Isotani said.</p>\n\n\n\n<p>Maria Florencia Ripani argued that language and culture are not technical parameters. \u201cLanguage is part of a certain culture,\u201d she said. \u201cIt is very important to work with user-centred design and use culturally relevant elements.\u201d She described how models in Luganda already outperform GPT-3.5 from two years ago, despite substantial performance degradation compared to English.</p>\n\n\n\n<p>Juan-Pablo Giraldo Ospino of UNICEF delivered the most direct challenge: \u201cTeachers cannot be replaced in the education system and cannot be replaced in the way our brain develops, particularly in the early years.\u201d He warned that framing AI as a solution to teacher shortages risks exacerbating burnout, because \u201cif we increase productivity, actually we are going to make teachers work the same hours or more to be able to teach more kids.\u201d</p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-learning-science-points-toward-slow-ai\">Learning science points toward slow AI</h2>\n\n\n\n<p>The final session, on applying learning science with AI, offered the clearest design direction of the day. Ronald Beghetto of Arizona State University introduced the concept of \u201cslow AI,\u201d a deliberate counterpoint to the transactional \u201cfast AI\u201d mode in which users delegate cognitive and creative work entirely. \u201cA lot of people think creativity is just kind of unbridled originality, but really creativity is constrained originality,\u201d he said. His framework asks learners to do the mental work first, then turn to AI as a provocateur or scaffold, then return to human teams.</p>\n\n\n\n<p>Dora Demszky of Stanford presented the first large-scale randomised controlled trial of automated feedback in physical classrooms. Teachers using her TeachFX platform received real-time feedback on their use of focusing questions, and the behaviour increased by 15 to 20 percent. But she also noted a structural problem: \u201cOne of the issues with machine learning systems is that they are trained to say what you want to hear rather than adding the productive friction that is necessary for learning.\u201d Sycophancy in large language models is not a bug. It is a design feature that undermines learning.</p>\n\n\n\n<p>Nikol Rummel and Sebastian Strauss presented a systematic review of GenAI in collaborative learning that found only two experimental studies measuring domain-specific knowledge outcomes. The evidence base for one of the most-discussed applications of AI in education barely exists.</p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-beyond-k-12-what-oecd-digital-education-outlook-s-dialogue-means-for-humanitarian-and-health-systems\">Beyond K-12: what OECD Digital Education Outlook\u2019s dialogue means for humanitarian and health systems</h2>\n\n\n\n<p>The OECD conference focused on schools. 
But every finding from Day 2 reaches into the world I work in, where health workers and humanitarian practitioners learn from each other across more than 130 countries in the peer learning networks coordinated by The Geneva Learning Foundation.</p>\n\n\n\n<p>The Day 1 article mapped three implications. Day 2 deepened each of them and surfaced new ones.</p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-self-replacement-is-already-happening-in-global-health\">Self-replacement is already happening in global health</h3>\n\n\n\n<p>Moutinho\u2019s concept of self-replacement is not speculative in our context. It describes what I have already observed. In our Teach to Reach programmes, <a href=\"https://redasadki.me/2025/03/09/artificial-intelligence-accountability-and-authenticity-knowledge-production-and-power-in-global-health-crisis/\" type=\"post\" id=\"20803\">highly committed health workers have begun submitting narratives that clearly bear the mark of generative AI</a>. They are not cheating. They are doing what every professional does when a tool appears that can produce faster, more polished output. But the result is a loss of the situated, experiential knowledge that makes their contributions irreplaceable.</p>\n\n\n\n<p>I wrote about this as the \u201ctransparency paradox\u201d in my work on AI, accountability, and authenticity in global health. If a health worker discloses AI use, their work is devalued as inauthentic. If they conceal it, they carry the ethical tension alone. </p>\n\n\n\n<p>Moutinho\u2019s framing adds a dimension I had not fully articulated: the risk is not only institutional but developmental. When practitioners delegate the act of writing about their own experience to AI, they may lose the capacity to recognise what they know that AI does not.</p>\n\n\n\n<p>In crisis contexts, this is not an abstraction. A health worker who cannot articulate the reasoning behind a vaccination micro-plan, because the writing was done by a chatbot and the thinking was never fully formed, is a health worker less able to adapt when the plan meets reality on the ground.</p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-the-evidence-gap-is-wider-in-global-health-than-in-k-12\">The evidence gap is wider in global health than in K-12</h3>\n\n\n\n<p>Isabelle Hau\u2019s finding that only 22 causal-quality studies on AI and learning exist is alarming for education. In global health and humanitarian response, the number is effectively zero. AI tools are being deployed to support health worker training, translate guidance, and even generate response protocols, but I am not aware of a single randomised controlled trial measuring whether these tools produce genuine learning gains among health professionals in low-resource settings.</p>\n\n\n\n<p>Gasevic\u2019s finding that students given immediate AI access performed no better than AI alone has a direct analogue. If a health worker uses a general-purpose chatbot to draft an outbreak response protocol without first developing the clinical reasoning that the protocol requires, the output may be fluent and authoritative while the human understanding behind it is empty. In K-12, this undermines learning. 
In health systems and in humanitarian response, it can cost lives.</p>\n\n\n\n<p>At The Geneva Learning Foundation, we introduced our first AI co-worker, <a href=\"https://redasadki.me/2026/03/13/introducing-claude-cardot-our-first-ai-co-worker-to-support-frontline-health-and-humanitarian-leaders/\" type=\"post\" id=\"23130\">Claude Cardot</a>, in March 2026, deliberately naming and governing the role. We are treating Claude\u2019s onboarding as a structured experiment, asking in public whether an AI co-worker can reduce the cognitive load on a small team without diluting authenticity or erasing local voice. But we are under no illusion that this is anything other than a design question that the evidence base cannot yet answer.</p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-the-flipped-ai-divide-is-the-central-equity-problem-for-global-health\">The flipped AI divide is the central equity problem for global health</h3>\n\n\n\n<p>Moutinho\u2019s \u201cflipped AI divide\u201d is the most precise description I have encountered of the equity challenge in global health AI. In the countries where The Geneva Learning Foundation works, access to advanced models is already limited by geofencing, pricing, and risk aversion by international organisations. When practitioners in these settings do use AI, they use general-purpose chatbots without pedagogical intent, institutional support, or safety standards. This is exactly the configuration that the OECD evidence shows produces performance gains without learning gains.</p>\n\n\n\n<p>Meanwhile, organisations in Geneva, New York, and Washington have access to purpose-built AI tools, teams of data scientists, and legal departments that can negotiate safety standards. The result is that the most resource-rich actors get AI that is designed to support human capability, while the practitioners who face the most severe challenges get AI that is designed for consumer engagement. This is the flipped AI divide in global health.</p>\n\n\n\n<p>Isotani\u2019s AIED Unplugged model offers a counterpoint that speaks directly to our work. His system proves that it is possible to design AI for resource-constrained environments at national scale, reaching half a million students with no student devices and no classroom internet. If it is possible in Brazilian public schools, it is possible in the health systems where we work. The design principle is the same one we apply at The Geneva Learning Foundation: the burden of adaptation must fall on technology designers, not on the practitioners and communities who are often already stretched to their limits.</p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-peer-learning-is-the-missing-architecture\">Peer learning is the missing architecture</h3>\n\n\n\n<p>Across two days of the OECD conference, one word barely appeared: peers. The conference discussed teachers, students, researchers, companies, and policymakers. It discussed tutoring, assessment, safety, and governance. What it did not discuss, with rare exceptions, was what happens when learners support each other, becoming both teachers and learners.</p>\n\n\n\n<p>This is the gap that our work fills. 
In the <a href=\"https://redasadki.me/2025/06/17/when-funding-shrinks-impact-must-grow-the-economic-case-for-peer-learning-networks/\" type=\"post\" id=\"20995\">peer learning networks that The Geneva Learning Foundation has built over a decade</a>, health workers develop context-specific projects, review each other\u2019s work using structured rubrics, and engage in facilitated dialogue that surfaces patterns across thousands of contexts. We envision AI not as a tutor or an oracle but as a co-worker that helps with tasks that peers have neither time nor bandwidth to perform at scale.</p>\n\n\n\n<p>Gasevic\u2019s experimental finding confirms the design logic we have been following. Students who developed their skills before AI was introduced achieved genuine synergy. In our networks, practitioners build their capacities through structured peer interaction before AI enters the picture. The human architecture comes first. AI amplifies and augments what the network has already built. Its boundaries are defined by the network.</p>\n\n\n\n<p>Beghetto\u2019s \u201cslow AI\u201d resonates with this approach. In a peer learning network, the \u201cproductive friction\u201d that commercial AI removes is precisely what the network is designed to generate. Peer review, facilitated dialogue, and iterative project development are all forms of friction that produce learning. If we strip these out and replace them with chatbot-generated feedback, we lose what makes the system work.</p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-a-leadership-agenda-for-day-2\">A leadership agenda for Day 2</h2>\n\n\n\n<p>Day 1 produced a leadership agenda focused on the performance-learning distinction, the need for pedagogy before technology, and the urgency of equity. Day 2 extends it.</p>\n\n\n\n<p>First, leaders must confront the self-replacement problem directly. Moutinho described it in young people. I see it in health and humanitarian professionals. The response is not to ban AI or to ignore it, but to create conditions in which practitioners can use AI openly and with pedagogical intent. This means moving from \u201cshadow AI\u201d to governed AI, as we are doing with Claude Cardot. It also means designing learning experiences that require practitioners to do the cognitive work before AI enters, not after.</p>\n\n\n\n<p>Second, leaders must demand evidence. Twenty-two causal studies is not a sufficient foundation for policy. In global health and humanitarian response, where the evidence base is even thinner, leaders should insist that any AI deployment in training or capacity-building includes a credible evaluation design. Efficiency gains are not learning gains. The two must be measured separately.</p>\n\n\n\n<p>Third, leaders must resist the flipped AI divide. If the most resource-constrained practitioners end up with unguided access to general-purpose chatbots while the most resource-rich organisations get purpose-built, safety-tested, pedagogy-driven AI tools, the result will be a deepening of the inequity that <a href=\"https://redasadki.me/2025/07/16/why-peer-learning-is-critical-to-survive-the-age-of-artificial-intelligence/\" type=\"link\" id=\"https://redasadki.me/2025/07/16/why-peer-learning-is-critical-to-survive-the-age-of-artificial-intelligence/\">peer learning networks are designed to overcome</a>. The Isotani model shows that another path is possible. Leaders should demand it.</p>\n\n\n\n<p>Fourth, leaders must invest in peer learning infrastructure alongside AI deployment. 
Every finding from the OECD conference confirms that AI is most powerful when embedded in human systems that provide the friction, the context, and the accountability that AI alone cannot supply. Peer learning networks are not optional. They are the architecture that determines whether AI amplifies human capability or replaces it.</p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-what-the-second-day-left-unresolved\">What the second day left unresolved</h2>\n\n\n\n<p>The second day of the OECD conference did not resolve the question that Moutinho raised. It sharpened it. If young people are preemptively replacing themselves, and if health workers in crisis settings are quietly delegating their situated knowledge to machines, then the question is not whether AI can help human beings learn and grow. It is whether we will design the systems that make that possible before the window closes.</p>\n\n\n\n<p>Guellec\u2019s observation that his own OECD chapter was outdated before the conference took place is not only a comment about the pace of change in AI. It is a warning about the pace of change required in every institution that claims to support learning. The evidence is now clear that doing nothing, or doing the wrong thing, is not neutral. It is actively harmful. And the people most at risk are, as always, those with the least institutional support and the most to lose.</p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-references\">References</h2>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Isotani S, Bittencourt II, Challco GC, Dermeval D, Mello RF. AIED Unplugged: Leapfrogging the Digital Divide to Reach the Underserved. In: Wang N, Rebolledo-Mendez G, Dimitrova V, Matsuda N, Santos OC, editors. Artificial Intelligence in Education. Posters and Late Breaking Results, Workshops and Tutorials, Industry and Innovation Tracks, Practitioners, Doctoral Consortium and Blue Sky. Cham: Springer Nature Switzerland; 2023. p. 772\u20139. (Communications in Computer and Information Science). <a href=\"https://doi.org/10.1007/978-3-031-36336-8_118\">https://doi.org/10.1007/978-3-031-36336-8_118</a></li>\n\n\n\n<li>Kusumegi K, Yang X, Ginsparg P, De Vaan M, Stuart T, Yin Y. Scientific production in the era of large language models. Science. 2025 Dec 18;390(6779):1240\u20133. <a href=\"https://doi.org/10.1126/science.adw3000\">https://doi.org/10.1126/science.adw3000</a></li>\n\n\n\n<li>OECD. OECD Digital Education Outlook 2026: Exploring Effective Uses of Generative AI in Education. OECD Publishing, 2026. <a href=\"https://doi.org/10.1787/062a7394-en\">https://doi.org/10.1787/062a7394-en</a>.</li>\n\n\n\n<li>Reda Sadki (2025). The great unlearning: notes on the Empower Learners for the Age of AI conference. Reda Sadki: Learning to make a difference. <a href=\"https://doi.org/10.59350/859ed-e8148\">https://doi.org/10.59350/859ed-e8148</a></li>\n\n\n\n<li>Reda Sadki (2025). Artificial intelligence, accountability, and authenticity: knowledge production and power in global health crisis. Reda Sadki: Learning to make a difference. <a href=\"https://doi.org/10.59350/w1ydf-gd85\">https://doi.org/10.59350/w1ydf-gd85</a></li>\n\n\n\n<li>Reda Sadki (2025). When funding shrinks, impact must grow: the economic case for peer learning networks. Reda Sadki: Learning to make a difference. <a href=\"https://doi.org/10.59350/redasadki.20995\">https://doi.org/10.59350/redasadki.20995</a></li>\n\n\n\n<li>Reda Sadki (2025). Why peer learning is critical to survive the Age of Artificial Intelligence. 
Reda Sadki: Learning to make a difference. <a href=\"https://doi.org/10.59350/redasadki.21123\">https://doi.org/10.59350/redasadki.21123</a></li>\n\n\n\n<li>Reda Sadki (2026). Introducing Claude Cardot, our first AI co-worker to support frontline health and humanitarian leaders. Reda Sadki: Learning to make a difference. <a href=\"https://doi.org/10.59350/6rjnm-1rd08\">https://doi.org/10.59350/6rjnm-1rd08</a></li>\n\n\n\n<li>Reda Sadki (2026). OECD Digital Education Outlook 2026: How can AI help human beings learn and grow?. Reda Sadki: Learning to make a difference. <a href=\"https://doi.org/10.59350/1bqm0-1d126\">https://doi.org/10.59350/1bqm0-1d126</a></li>\n</ol>\n","doi":"https://doi.org/10.59350/skb2r-wqp57","funding_references":null,"guid":"https://redasadki.me/?p=23278","id":"5143a891-fbd6-49b1-acd3-2e152fe370af","image":"https://redasadki.me/wp-content/uploads/2026/04/OECD-Digital-Education-Outlook-2026-Day-2.jpg","indexed":true,"indexed_at":1775203670,"language":"en","parent_doi":null,"published_at":1775203058,"reference":[{"id":"https://doi.org/10.1007/978-3-031-36336-8_118","unstructured":"Isotani S, Bittencourt II, Challco GC, Dermeval D, Mello RF. AIED Unplugged: Leapfrogging the Digital Divide to Reach the Underserved. In: Wang N, Rebolledo-Mendez G, Dimitrova V, Matsuda N, Santos OC, editors. Artificial Intelligence in Education. Posters and Late Breaking Results, Workshops and Tutorials, Industry and Innovation Tracks, Practitioners, Doctoral Consortium and Blue Sky. Cham: Springer Nature Switzerland; 2023. p. 772\u20139. (Communications in Computer and Information Science). https://doi.org/10.1007/978-3-031-36336-8_118"},{"id":"https://doi.org/10.1126/science.adw3000","unstructured":"Kusumegi K, Yang X, Ginsparg P, De Vaan M, Stuart T, Yin Y. Scientific production in the era of large language models. Science. 2025 Dec 18;390(6779):1240\u20133. https://doi.org/10.1126/science.adw3000"},{"id":"https://doi.org/10.1787/062a7394-en","unstructured":"OECD. OECD Digital Education Outlook 2026: Exploring Effective Uses of Generative AI in Education. OECD Publishing, 2026. https://doi.org/10.1787/062a7394-en."},{"id":"https://doi.org/10.59350/859ed-e8148","unstructured":"Reda Sadki (2025). The great unlearning: notes on the Empower Learners for the Age of AI conference. Reda Sadki: Learning to make a difference. https://doi.org/10.59350/859ed-e8148"},{"id":"https://doi.org/10.59350/w1ydf-gd85","unstructured":"Reda Sadki (2025). Artificial intelligence, accountability, and authenticity: knowledge production and power in global health crisis. Reda Sadki: Learning to make a difference. https://doi.org/10.59350/w1ydf-gd85"},{"id":"https://doi.org/10.59350/redasadki.20995","unstructured":"Reda Sadki (2025). When funding shrinks, impact must grow: the economic case for peer learning networks. Reda Sadki: Learning to make a difference. https://doi.org/10.59350/redasadki.20995"},{"id":"https://doi.org/10.59350/redasadki.21123","unstructured":"Reda Sadki (2025). Why peer learning is critical to survive the Age of Artificial Intelligence. Reda Sadki: Learning to make a difference. https://doi.org/10.59350/redasadki.21123"},{"id":"https://doi.org/10.59350/6rjnm-1rd08","unstructured":"Reda Sadki (2026). Introducing Claude Cardot, our first AI co-worker to support frontline health and humanitarian leaders. Reda Sadki: Learning to make a difference. https://doi.org/10.59350/6rjnm-1rd08"},{"id":"https://doi.org/10.59350/1bqm0-1d126","unstructured":"Reda Sadki (2026). 
OECD Digital Education Outlook 2026: How can AI help human beings learn and grow?. Reda Sadki: Learning to make a difference. https://doi.org/10.59350/1bqm0-1d126"}],"registered_at":0,"relationships":[],"rid":"7w31z-37595","status":"active","summary":"In my Day 1 article, I wrote that the OECD Digital Education Outlook 2026 conference documented performance gains alongside learning losses, efficiency alongside declining human competence, and the emergence of what Dragan Gasevic called \u201cmetacognitive laziness.\u201d I described a day that did not offer comfort.  Where the first day established the tension between performance and learning, the second day forced the question of what to do about it.","tags":["Artificial Intelligence","AI4Health","Andreas Schleicher","Empower Learners For The Age Of AI","George Siemens"],"title":"AI self-replacement: what happens when we delegate our thoughts to artificial intelligence?","updated_at":1775203258,"url":"https://redasadki.me/2026/04/03/ai-self-replacement-what-happens-when-we-delegate-our-thoughts-to-artificial-intelligence/","version":"v1"}},{"document":{"abstract":null,"archive_url":null,"authors":[{"affiliation":[{"id":"https://ror.org/013meh722","name":"University of Cambridge"}],"contributor_roles":[],"family":"Madhavapeddy","given":"Anil","url":"https://orcid.org/0000-0001-8954-2428"}],"blog":{"archive_collection":null,"archive_host":null,"archive_prefix":null,"archive_timestamps":null,"authors":null,"canonical_url":null,"category":"computerAndInformationSciences","community_id":"472a49be-dc61-4a17-97f0-d1ff17b0dadd","created_at":1760341563.110877,"current_feed_url":null,"description":null,"doi_as_guid":false,"favicon":"https://anil.recoil.org/assets/favicon.ico","feed_format":"application/feed+json","feed_url":"https://anil.recoil.org/perma.json","filter":null,"funding":null,"generator":"Other","generator_raw":"Other","home_page_url":"https://anil.recoil.org/notes","id":"1436e2f2-fbbf-4741-897f-5198070c7195","indexed":true,"issn":null,"language":"en","license":"https://creativecommons.org/licenses/by/4.0/legalcode","mastodon":null,"prefix":"10.59350","registered_at":0,"relative_url":null,"ror":null,"secure":true,"slug":"anil","status":"active","subfield":"1702","subfield_validated":null,"title":"Anil Madhavapeddy's feed","updated_at":1775288909.344299,"use_api":null,"use_mastodon":false,"user_id":null},"blog_name":"Anil Madhavapeddy's feed","blog_slug":"anil","content_html":"<p>After my <a href=\"https://anil.recoil.org/notes/aoah-2025\">December of agentic coding</a> sprint, I was left quite\n<a href=\"https://marvinh.dev/blog/ddosing-the-human-brain/\">frazzled</a> but also with a\npractical problem. I've got two kinds of libraries: the ones I care about (and\nhandcraft), and the wild experiments that look perfectly formed but are in fact just\n(well typed) slop. 
After <a href=\"https://anil.recoil.org/notes/claude-copilot-sandbox\">a year</a> of doing this, it's obvious that the <em>quality</em> of generated code also varies dramatically as\nmodels steadily improve and agentic harnesses improve context management.</p>\n<p>This post is about an <strong><a href=\"https://github.com/avsm/ocaml-ai-disclosure\">ocaml-ai-disclosure proposal</a></strong> I put together to help track this in OCaml using metadata and <a href=\"https://ocaml.org/manual/5.3/attributes.html\">extension attributes</a> in source code.</p>\n<h2 id=\"the-eu-is-mandating-what-this-summer\"><a aria-hidden=\"true\" class=\"anchor\" href=\"https://anil.recoil.org/notes/opam-ai-disclosure/#the-eu-is-mandating-what-this-summer\"></a>The EU is mandating what this summer?!</h2>\n<p>Toby Jaffey pointed\nme to the <a href=\"https://www.w3.org/community/ai-content-disclosure/\">W3C AI Content Disclosure</a>\n<a href=\"https://anil.recoil.org/notes/2026w13\">last week</a>. The bit that\nproperly surprised me was a legal snippet buried in their README:</p>\n<blockquote>\n<p>The EU AI Act Article 50 (effective August 2026) requires that AI-generated text content be \"marked in a machine-readable format and detectable as artificially generated or manipulated.\"\n<cite>-- <a href=\"https://github.com/dweekly/ai-content-disclosure?tab=readme-ov-file\">ai-content-disclosure</a>, David E. Weekly, 2026</cite></p>\n</blockquote>\n<p>This summer!!! Whether source code falls under \"text content\" is an <a href=\"https://eur-lex.europa.eu/eli/reg/2024/1689/oj\">open\nquestion</a> that hasn't been\naddressed in existing legal commentary as far as I can tell (nor can I read the\nraw 300+ pages to figure it out for myself).  However, regardless of how lawyers eventually\nparse this, voluntary disclosure for code seems like a sensible thing to do anyway.</p>\n<p>I've therefore put together an <strong><a href=\"https://github.com/avsm/ocaml-ai-disclosure\">ocaml-ai-disclosure</a></strong> repository that contains a draft specification and OCaml reference tooling for voluntary, machine-readable AI content disclosure in OCaml code. I'm interested in thoughts both from the OCaml community and from other language ecosystems. Weirdly, I can't find a single other programming language that's proposed anything for source code after some searching.</p>\n<p><a href=\"https://eur-lex.europa.eu/eli/reg/2024/1689/oj\"> <img alt=\"%c\" src=\"https://anil.recoil.org/images/eu-ai-act-1.webp\" title=\"Not even reading the AI Act in my mothertongue shed light on the matter. (Ok ok, it's about laying down harmonised rules on AI and amending existing Regulations)\"/> </a></p>\n<h2 id=\"ai-disclosure-for-ocaml-is-pretty-easy\"><a aria-hidden=\"true\" class=\"anchor\" href=\"https://anil.recoil.org/notes/opam-ai-disclosure/#ai-disclosure-for-ocaml-is-pretty-easy\"></a>AI Disclosure for OCaml is pretty easy</h2>\n<p>The OCaml ecosystem's accumulating code with varying degrees of AI involvement, but currently no machine-readable way to signal it. We obviously need to be very careful about how we mix this code into the <a href=\"https://github.com/ocaml/opam-repository\">commons</a>, because the usual social signals we use to review packages are basically useless now.</p>\n<p>However, a binary AI \"yes/no\" flag doesn't capture the reality of how people actually work with these tools. 
The code I wrote during <a href=\"https://anil.recoil.org/notes/aoah-2025\">AoAH</a> ranged from a one-shot <em>\"CC generated the whole module from a one-line prompt\"</em> to <em>\"I wrote the core logic by hand and Claude sorted the pretty-printer boilerplate\"</em> or even <em>\"<a href=\"https://toao.com/blog/check-with-gemini\">I got CC to test with Gemini</a>\"</em>.</p>\n<p>My proposal is extremely simple, here's how it works...</p>\n<h3 id=\"package-disclosures\"><a aria-hidden=\"true\" class=\"anchor\" href=\"https://anil.recoil.org/notes/opam-ai-disclosure/#package-disclosures\"></a>Package Disclosures</h3>\n<p>An opam package can declare its disclosure using extension fields:</p>\n<pre><code>x-ai-disclosure: \"ai-assisted\"\nx-ai-model: \"claude-opus-4-6\"\nx-ai-provider: \"Anthropic\"\n</code></pre>\n<p>Note: This may just become a list of values in the final proposal, but you get the idea.</p>\n<h3 id=\"ocaml-module-level\"><a aria-hidden=\"true\" class=\"anchor\" href=\"https://anil.recoil.org/notes/opam-ai-disclosure/#ocaml-module-level\"></a>OCaml Module level</h3>\n<p>OCaml supports extension attributes, which we use via a floating attribute that applies to the entire compilation unit:</p>\n<pre><code class=\"language-ocaml\">[@@@ai_disclosure \"ai-generated\"]\n[@@@ai_model \"claude-opus-4-6\"]\n[@@@ai_provider \"Anthropic\"]\n\nlet foo = ...\nlet bar = ...\n</code></pre>\n<p>These can also be scoped more finely via declaration attributes that apply to a single binding:</p>\n<pre><code class=\"language-ocaml\">[@@@ai_disclosure \"ai-assisted\"]\n\nlet human_written x = ...\n\nlet ai_helper y =\n  ...\n[@@ai_disclosure \"ai-generated\"]\n</code></pre>\n<p>Disclosure follows a nearest-ancestor inheritance model like the W3C HTML proposal, whereby an explicit annotation overrides the inherited value.</p>\n<p>One detail I'm quite pleased with is that <code>.mli</code> and <code>.ml</code> files are annotated independently, which means that one workflow I use quite a bit of writing the interface files first can be tracked separately from the implementations themselves.</p>\n<h3 id=\"the-disclosure-vocabulary\"><a aria-hidden=\"true\" class=\"anchor\" href=\"https://anil.recoil.org/notes/opam-ai-disclosure/#the-disclosure-vocabulary\"></a>The disclosure vocabulary</h3>\n<p>I use the same four levels as the W3C vocabulary, which works well enough for HTML:</p>\n<div role=\"region\"><table>\n<tr>\n<th>Value</th>\n<th>Meaning</th>\n</tr>\n<tr>\n<td><code>none</code></td>\n<td>No AI involvement</td>\n</tr>\n<tr>\n<td><code>ai-assisted</code></td>\n<td>Human-authored, AI edited or refined</td>\n</tr>\n<tr>\n<td><code>ai-generated</code></td>\n<td>AI-generated with human prompting and review</td>\n</tr>\n<tr>\n<td><code>autonomous</code></td>\n<td>AI-generated without human oversight</td>\n</tr>\n</table></div><p>I treat the absence of annotation as \"unknown\", not \"none\". The <code>none</code> value exists for authors who <em>want</em> to positively assert human authorship, perhaps because their project's policy requires it or because they want reviewers to know this particular module was deliberately hand-written. Tools may also choose to spelunk back through pre-2022 code and add <code>none</code> automatically where it's obvious.</p>\n<p>If a module contains both human-written and AI-generated bits, you can annotate\nat the package level and add overrides directly in code.  
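For instance, here's a minimal sketch of how the two levels compose (the package name and file name below are hypothetical, not taken from the spec). The opam file sets the package-wide default:</p>\n<pre><code># foo.opam -- hypothetical package using the draft x- fields\nx-ai-disclosure: \"ai-assisted\"\n</code></pre>\n<p>while a fully generated compilation unit overrides that default in source:</p>\n<pre><code class=\"language-ocaml\">(* codegen.ml: overrides the package-level default for this unit only *)\n[@@@ai_disclosure \"ai-generated\"]\n[@@@ai_model \"claude-opus-4-6\"]\n\nlet pp_config = ...\n</code></pre>\n<p>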
OCaml's module system\nand attributes give us a natural hierarchy for this.</p>\n<h3 id=\"model-provenance\"><a aria-hidden=\"true\" class=\"anchor\" href=\"https://anil.recoil.org/notes/opam-ai-disclosure/#model-provenance\"></a>Model provenance</h3>\n<p>Each annotation can also optionally carry provenance metadata:</p>\n<ul>\n<li><code>ai_model</code> (the API model identifier, like <code>claude-opus-4-6</code> or <code>gpt-4o</code>)</li>\n<li><code>ai_provider</code> (like <code>Anthropic</code> or <code>OpenAI</code>).</li>\n</ul>\n<p><a href=\"https://mynameismwd.org\">Michael Dales</a> pointed out it's quite common to use multiple models (e.g. to cross-test), so these attributes can be repeated when multiple models contributed.</p>\n<h2 id=\"the-programmer-burden-is-minimal\"><a aria-hidden=\"true\" class=\"anchor\" href=\"https://anil.recoil.org/notes/opam-ai-disclosure/#the-programmer-burden-is-minimal\"></a>The programmer burden is minimal</h2>\n<p>The nice thing about this proposal is that there's <em>no</em> overhead to a programmer who chooses not to use AI assistance.</p>\n<p>For those who do, I've got a <a href=\"https://github.com/avsm/ocaml-claude-marketplace/blob/main/plugins/ocaml-dev/skills/ai-disclosure/SKILL.md\">Claude Skill ocaml-dev:ai-disclosure</a>\nthat instructs the agent to add the right annotations in.  So when Claude\ngenerates OCaml code in my sessions, it now inserts the attributes and also\nmaintains the <code>.opam.template</code> files.</p>\n<p>During code review, I read the AI-generated code and edit away to (hopefully) improve it, and downgrade <code>ai-generated</code> to <code>ai-assisted</code> on the way.  If I've substantially rewritten the code then I just remove the annotation and fully claim it.</p>\n<p>The key principle is that disclosure reflects the <em>current state of the code</em> to make it easier for a human to claim responsibility. A human who has thoroughly reviewed, understood, and rewritten a piece of code may reasonably call it their own. This is not my legal opinion, just a moral, informal and pragmatic one!</p>\n<h2 id=\"what-this-isnt\"><a aria-hidden=\"true\" class=\"anchor\" href=\"https://anil.recoil.org/notes/opam-ai-disclosure/#what-this-isnt\"></a>What this isn't</h2>\n<p>A few things worth being explicit about after discussions around <a href=\"https://anil.recoil.org/projects/oxcaml\">my group</a> on the matter:</p>\n<ul>\n<li>\n<p>It's not a judgement on whether AI code is good or bad. The goal is a transparent, machine-readable signal so that consumers of the code (be they humans, puppies, licence checkers, package managers, CI systems, whatever) can apply their own policies.</p>\n</li>\n<li>\n<p>We don't use git for this. A human may commit AI-generated code, or an AI agent may commit code that was human-reviewed and hacked and slashed enough to be considered rewritten before the commit. Rebases and squashes also destroy attribution based on commits. Source-level attributes survive all these operations.</p>\n</li>\n<li>\n<p>It's not mandatory. The whole point is voluntary adoption. I have noticed a vague reluctance from the people I've talked to to declare, as they'll feel they're being judged. If the OCaml community decides this is useful, adoption will happen naturally. 
If not, then it'll just be me using it and I'm fine with that!</p>\n</li>\n</ul>\n<h2 id=\"whats-next\"><a aria-hidden=\"true\" class=\"anchor\" href=\"https://anil.recoil.org/notes/opam-ai-disclosure/#whats-next\"></a>What's next</h2>\n<p>I'm starting by integrating this into my own <a href=\"https://anil.recoil.org/notes/aoah-2025\">libraries</a> as a test bed. The Claude Code <a href=\"https://github.com/avsm/ocaml-claude-marketplace\">marketplace skill</a> is already available if you want to try the automated annotation in your own sessions.</p>\n<p>On the tooling side, there are several integration points I'd like to see if this idea has legs:</p>\n<ul>\n<li>odoc could render disclosure metadata alongside module documentation, perhaps using <a href=\"https://jon.recoil.org/blog/2026/03/weeknotes-2026-13.html\">the odoc plugin</a> system that <a href=\"https://jon.recoil.org\">Jon Ludlam</a> has been designing.</li>\n<li>merlin or ocaml-lsp could surface disclosure attributes in hover information in the IDE, giving you a quick 'trust signal' while reading other people's code.</li>\n<li>dune could gain native support for the <code>(ai_disclosure)</code> stanza to make the opam file generation easier.</li>\n<li>opam could eventually use disclosure fields during version solving. I think it'd be useful to have a solver constraint that prefers packages with human-reviewed code where available, and only fall back to AI if nothing else works.</li>\n</ul>\n<p>The full draft specification, FAQ, and reference implementation are at <strong><a href=\"https://github.com/avsm/ocaml-ai-disclosure\">github.com/avsm/ocaml-ai-disclosure</a></strong>.\nI'd love feedback on the spec. File issues on the repo or in the <a href=\"https://discuss.ocaml.org/t/a-proposal-for-voluntary-ai-disclosure-in-ocaml-code/17950\">OCaml Discussion thread</a>.</p><h1>References</h1><ul><li>Madhavapeddy (2026). .plan-26-13: Oxidised, standardised, and syndicated. <a href=\"https://doi.org/10.59350/ddx61-wd948\" target=\"_blank\"><i>10.59350/ddx61-wd948</i></a></li>\n<li>Madhavapeddy (2025). Oh my Claude, we need agentic copilot sandboxing right now. <a href=\"https://doi.org/10.59350/aecmt-k3h39\" target=\"_blank\"><i>10.59350/aecmt-k3h39</i></a></li></ul>","doi":"https://doi.org/10.59350/cxypn-ysv27","funding_references":null,"guid":"https://doi.org/10.59350/cxypn-ysv27","id":"ee0b8845-5954-47c8-bb7f-2f7aa1919276","image":null,"indexed":true,"indexed_at":1775238159,"language":"en","parent_doi":null,"published_at":1775174400,"reference":[{"cito":["cito:citesAsRelated"],"id":"https://doi.org/10.59350/ddx61-wd948","unstructured":" <b>[cito:citesAsRelated]</b>"},{"cito":["cito:citesAsRelated"],"id":"https://doi.org/10.59350/aecmt-k3h39","unstructured":" <b>[cito:citesAsRelated]</b>"}],"registered_at":0,"relationships":[],"rid":"a64qc-zfw45","status":"active","summary":"After my December of agentic coding sprint, I was left quite frazzled but also with a practical problem. 
I've got two kinds of libraries: the ones I care about (and handcraft), and the wild experiments that look perfectly formed but are in fact just (well typed) slop.","tags":["Ai","Ocaml","Oxcaml","Standards","Policy"],"title":"A Proposal for Voluntary AI Disclosure in OCaml Code","updated_at":1775174400,"url":"https://anil.recoil.org/notes/opam-ai-disclosure","version":"v1"}},{"document":{"abstract":null,"archive_url":null,"authors":[{"affiliation":[{"name":"Front Matter"}],"contributor_roles":[],"family":"Fenner","given":"Martin","url":"https://orcid.org/0000-0003-1419-2405"}],"blog":{"archive_collection":22096,"archive_host":null,"archive_prefix":"https://wayback.archive-it.org/22096/20231101172748/","archive_timestamps":[20231101172748,20240501180447,20241101172601],"authors":[{"name":"Martin Fenner","url":"https://orcid.org/0000-0003-1419-2405"}],"canonical_url":null,"category":"computerAndInformationSciences","community_id":"91dd2c24-5248-4510-9c2b-30b772bf8b60","created_at":1672561153,"current_feed_url":"","description":"The Front Matter Blog covers the intersection of science and technology since 2007.","doi_as_guid":false,"favicon":"https://rogue-scholar.org/api/communities/15a362ea-8138-42b8-917f-1840a92addf8/logo","feed_format":"application/atom+xml","feed_url":"https://blog.front-matter.de/atom","filter":null,"funding":null,"generator":"Ghost","generator_raw":"Ghost 5.52","home_page_url":"https://blog.front-matter.de","id":"74659bc5-e36e-4a27-901f-f0c8d5769cb8","indexed":null,"issn":"2749-9952","language":"en","license":"https://creativecommons.org/licenses/by/4.0/legalcode","mastodon":"https://hachyderm.io/@mfenner","prefix":"10.53731","registered_at":1729685319,"relative_url":null,"ror":null,"secure":true,"slug":"front_matter","status":"active","subfield":"1710","subfield_validated":null,"title":"Front Matter","updated_at":1775288960.43165,"use_api":true,"use_mastodon":true,"user_id":"8498eaf6-8c58-4b58-bc15-27eda292b1aa"},"blog_name":"Front Matter","blog_slug":"front_matter","content_html":"<h2 id=\"blogs-added-to-rogue-scholar\">Blogs added to Rogue Scholar</h2><p>One blog was added in March. This increases the number of participating blogs (after adjusting for retired blogs) to&nbsp;<strong>186</strong>, and the number of archived posts has grown to&nbsp;<strong>49,606</strong>&nbsp;\u2013 Rogue Scholar is getting closer to the big milestones of 200 participating blogs with 50,000 posts!</p><h3 id=\"orion-dbs\"><a href=\"https://rogue-scholar.org/communities/orion\" rel=\"noreferrer\">ORION-DBs</a></h3><p><em>Library and Information Sciences, English.</em><br><a href=\"https://orion-dbs.community/blog/\">https://orion-dbs.community/blog/</a></p><p>The backlog of new blog submissions is still not resolved, so please be patient. You can always reach out via&nbsp;<a href=\"https://join.slack.com/t/rogue-scholar/shared_invite/zt-2ylpq1yoy-o~TkxDarfz5LSMhGSCYtiA\" rel=\"noreferrer\">Slack</a>,&nbsp;<a href=\"mailto:info@rogue-scholar.org\" rel=\"noreferrer\">email</a>,&nbsp;<a href=\"https://wisskomm.social/@rogue_scholar\" rel=\"noreferrer\">Mastodon</a>, or&nbsp;<a href=\"https://bsky.app/profile/rogue-scholar.bsky.social\" rel=\"noreferrer\">Bluesky</a>&nbsp;to ask about the status of your submission.</p><h2 id=\"technical-updates\">Technical Updates</h2><p>One focus of the technical work in March was on&nbsp;infrastructure improvements. 
The monitoring of the Rogue Scholar infrastructure was improved by deploying a <a href=\"https://doi.org/10.53731/3w24g-cdz85\" rel=\"noreferrer\">self-hosted observability platform</a> for logs, metrics and errors with dashboards and alerting using the Grafana open source platform:</p><figure class=\"kg-card kg-image-card\"><img src=\"https://blog.front-matter.de/content/images/2026/04/image.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1600\" height=\"793\" srcset=\"https://blog.front-matter.de/content/images/size/w600/2026/04/image.png 600w, https://blog.front-matter.de/content/images/size/w1000/2026/04/image.png 1000w, https://blog.front-matter.de/content/images/2026/04/image.png 1600w\" sizes=\"(min-width: 720px) 720px\"></figure><p>The dashboard for key metadata metrics initially released in March 2025 was improved visually and <a href=\"https://doi.org/10.53731/809xc-y7r79\" rel=\"noreferrer\">launched for communities</a>, including blog communities:</p><figure class=\"kg-card kg-image-card\"><img src=\"https://blog.front-matter.de/content/images/2026/04/image-1.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1600\" height=\"610\" srcset=\"https://blog.front-matter.de/content/images/size/w600/2026/04/image-1.png 600w, https://blog.front-matter.de/content/images/size/w1000/2026/04/image-1.png 1000w, https://blog.front-matter.de/content/images/2026/04/image-1.png 1600w\" sizes=\"(min-width: 720px) 720px\"></figure><p>This makes it much easier for readers to get an overview of each blog participating in Rogue Scholar, and for blog authors to see gaps in metadata coverage that they can improve.</p><p>This week the <a href=\"https://doi.org/10.53731/dp6ra-trw41\" rel=\"noreferrer\">blog self-management in Rogue Scholar was improved</a>, enabling blog owners to update all relevant blog metadata.</p><h2 id=\"community-updates\">Community Updates</h2><p>The technical updates mentioned above are part of an effort to align Rogue Scholar better with the <a href=\"https://inveniordm.docs.cern.ch/\" rel=\"noreferrer\">InvenioRDM repository platform</a>. This will make it easier in the long run to sustain and update Rogue Scholar, as an increasing proportion of the required functionality is built into InvenioRDM and developed and used by other repositories.</p><p>Please use&nbsp;<a href=\"https://join.slack.com/t/rogue-scholar/shared_invite/zt-2ylpq1yoy-o~TkxDarfz5LSMhGSCYtiA\" rel=\"noreferrer\">Slack</a>,&nbsp;<a href=\"mailto:info@rogue-scholar.org\" rel=\"noreferrer\">email</a>,&nbsp;<a href=\"https://wisskomm.social/@rogue_scholar\" rel=\"noreferrer\">Mastodon</a>, or&nbsp;<a href=\"https://bsky.app/profile/rogue-scholar.bsky.social\" rel=\"noreferrer\">Bluesky</a>&nbsp;if you have any questions or comments.</p><div class=\"kg-card kg-callout-card kg-callout-card-blue\"><div class=\"kg-callout-text\">Rogue Scholar is a scholarly infrastructure that is free for all authors and readers. You can support Rogue Scholar with a one-time or recurring&nbsp;<a href=\"https://ko-fi.com/rogue_scholar\" rel=\"noreferrer\">donation</a>&nbsp;or by becoming a sponsor.</div></div><h2 id=\"references\">References</h2><ol><li>Fenner, M. (2026, March 16). Increasing operational transparency in Rogue Scholar. <em>Front Matter</em>. <a href=\"https://doi.org/10.53731/3w24g-cdz85\">https://doi.org/10.53731/3w24g-cdz85</a></li><li>Fenner, M. (2026, March 26). Introducing Rogue Scholar community dashboards. <em>Front Matter</em>. 
<a href=\"https://doi.org/10.53731/809xc-y7r79\">https://doi.org/10.53731/809xc-y7r79</a></li><li>Fenner, M. (2026, April 1). Rogue Scholar improves blog self-management. <em>Front Matter</em>. <a href=\"https://doi.org/10.53731/dp6ra-trw41\">https://doi.org/10.53731/dp6ra-trw41</a></li></ol>","doi":"https://doi.org/10.53731/wfp26-6ej12","funding_references":null,"guid":"https://doi.org/10.53731/wfp26-6ej12","id":"a8281cac-3d8f-453f-a078-3e2cd2b74251","image":"https://images.unsplash.com/photo-1573500883557-6049a3ab38b6?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=M3wxMTc3M3wwfDF8c2VhcmNofDQ1fHxlYXN0ZXJ8ZW58MHx8fHwxNzc1MTQzNTcwfDA&ixlib=rb-4.1.0&q=80&w=2000","indexed":true,"indexed_at":1775145877,"language":"en","parent_doi":null,"published_at":1775145556,"reference":[{"id":"https://doi.org/10.53731/3w24g-cdz85","type":"BlogPost","unstructured":"Fenner, M. (2026, March 16). Increasing operational transparency in Rogue Scholar. <i>Front Matter</i>. https://doi.org/10.53731/3w24g-cdz85"},{"id":"https://doi.org/10.53731/809xc-y7r79","type":"BlogPost","unstructured":"Fenner, M. (2026, March 26). Introducing Rogue Scholar community dashboards. <i>Front Matter</i>. https://doi.org/10.53731/809xc-y7r79"},{"id":"https://doi.org/10.53731/dp6ra-trw41","type":"BlogPost","unstructured":"Fenner, M. (2026, April 1). Rogue Scholar improves blog self-management. <i>Front Matter</i>. https://doi.org/10.53731/dp6ra-trw41"}],"registered_at":0,"relationships":[],"rid":"433q3-rg192","status":"active","summary":"Blogs added to Rogue Scholar  One blog was added in March.","tags":["Rogue Scholar","Newsletter"],"title":"Rogue Scholar Newsletter March 2026","updated_at":1775145556,"url":"https://blog.front-matter.de/posts/rogue-scholar-newsletter-march-2026/","version":"v1"}},{"document":{"abstract":null,"archive_url":null,"authors":[{"contributor_roles":[],"family":"Turner","given":"Stephen D."}],"blog":{"archive_collection":null,"archive_host":null,"archive_prefix":null,"archive_timestamps":null,"authors":[{"name":"Stephen Turner"}],"canonical_url":null,"category":"biologicalSciences","community_id":"382941a7-2ffa-41df-8bbb-5f772188517f","created_at":1734172613,"current_feed_url":null,"description":"A practicing data scientist's take on AI, genomics, biosecurity, and the ways AI is reshaping how science gets done. Weekly updates from the field. 
Occasional notes on programming.","doi_as_guid":false,"favicon":null,"feed_format":"application/rss+xml","feed_url":"https://blog.stephenturner.us/feed","filter":null,"funding":null,"generator":"Substack","generator_raw":"Substack","home_page_url":"https://blog.stephenturner.us/","id":"bffe125c-3dfa-4f25-998f-e62878677c7c","indexed":true,"issn":null,"language":"en","license":"https://creativecommons.org/licenses/by/4.0/legalcode","mastodon":"https://bsky.app/profile/stephenturner.us","prefix":"10.59350","registered_at":0,"relative_url":null,"ror":null,"secure":true,"slug":"stephenturner","status":"active","subfield":"1311","subfield_validated":true,"title":"Paired Ends","updated_at":1775289119.319881,"use_api":null,"use_mastodon":false,"user_id":"ae63ef98-7475-4cc1-b3eb-244d5e096f0f"},"blog_name":"Paired Ends","blog_slug":"stephenturner","content_html":"<p>Earlier this week I wrote about a <a href=\"https://ietresearch.onlinelibrary.wiley.com/doi/10.1049/enb2.70003\">paper</a> by Jacob Beal (Raytheon BBN Technologies) and Tessa Alexanian (International Biosecurity and Biosafety Initiative for Science, IBBIS) on creating enforceable biosecurity standards for nucleic acid providers. </p><div class=\"digest-post-embed\" data-attrs=\"{&quot;nodeId&quot;:&quot;da4d1060-ea03-4d5a-90cb-639598542e33&quot;,&quot;caption&quot;:&quot;Jacob Beal (Raytheon BBN Technologies) and Tessa Alexanian (International Biosecurity and Biosafety Initiative for Science) published a paper late last year:&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Creating Enforceable Biosecurity Standards for Nucleic Acid Providers&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:1536121,&quot;name&quot;:&quot;Stephen D. Turner&quot;,&quot;bio&quot;:&quot;https://stephenturner.us/&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!WGQE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd1706730-c948-4acf-9c45-b14b4e3da1b9_651x651.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-03-30T13:31:54.951Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!OlIC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fadf87083-7065-457d-a497-5a9ce7d6287f_2128x798.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://blog.stephenturner.us/p/enforceable-biosecurity-standards-nucleic-acid-providers&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:182944060,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:2,&quot;comment_count&quot;:2,&quot;publication_id&quot;:161890,&quot;publication_name&quot;:&quot;Paired Ends&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!hfDI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F894081de-334e-4173-8a0c-e64762c2c838_1030x1030.png&quot;,&quot;belowTheFold&quot;:false,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}\"></div><p>It\u2019s a good paper, and I recommend reading it! I noted toward the end of the post that the customer screening side felt a bit undercooked. 
<a href=\"https://tessa.fyi/\">Tessa Alexanian</a>, one of the paper\u2019s coauthors, <a href=\"https://blog.stephenturner.us/p/enforceable-biosecurity-standards-nucleic-acid-providers/comment/236846598\">left a comment</a> (thanks Tessa!) pointing me to <a href=\"https://ibbis.bio/translating-customer-screening-guidance-into-practical-tools/\">additional work</a> she and Sarah Carter had done on translating customer screening guidance into practical tools, and to a <a href=\"https://www.biorxiv.org/content/10.64898/2026.02.27.708645v1\">new preprint from Acelas et al.</a> evaluating AI-assisted customer verification for synthetic nucleic acid screening.</p><blockquote><p><strong>Acelas, A., Palya, H., Flyangolts, K., Fady, P. E., &amp; Nelson, C. (2026). Evaluating AI-Assisted Customer Verification for Synthetic Nucleic Acid Screening. bioRxiv 2026.02.27.708645; doi: <a href=\"https://doi.org/10.64898/2026.02.27.708645\">https://doi.org/10.64898/2026.02.27.708645</a>.</strong></p></blockquote><p>Here\u2019s the problem the paper addresses: When someone orders a synthetic nucleic acid that matches a sequence of concern, the provider needs to verify that the customer is who they say they are and has a legitimate reason to order it. This <em>legitimacy screening</em> involves checking institutional affiliations, email domains, sanctions lists, and relevant publications or patents. It\u2019s tedious, largely mechanical work, and the cost discourages adoption. Legitimacy screening runs roughly ten times more expensive per order than sequence screening alone.</p><p>Acelas et al. tested 5 LLMs (Claude Sonnet 4, Gemini 2.5 Pro, Grok 4, GLM 4.6, and MiniMax M2) on these verification tasks against a human baseline, using 41 customer profiles paired with simulated orders for sequences of concern. The best-performing model, Gemini 2.5 Pro equipped with bibliographic and sanctions APIs, achieved a 90% overall pass rate compared to about 80% for human screeners. Total cost per customer dropped from $14.04 for manual screening to $1.18 with AI assistance. 
For the information-gathering tasks alone (excluding human review of the final decision), the average was $0.23 per customer, roughly 50 times cheaper.</p><div class=\"captioned-image-container\"><figure><figcaption class=\"image-caption\">Table 2 from <a href=\"https://www.biorxiv.org/content/10.64898/2026.02.27.708645v1.full\">Acelas 2026</a>: Per-customer screening costs and processing times. \u201cInformation gathering\u201d covers Tasks 1\u20135 only; \u201ctotal cost\u201d adds the time cost of human review of the AI-generated report. For human baselines, these phases were not separated, so only totals are reported. Human costs estimated at $54/hour based on advertised salaries at a large DNA synthesis provider. AI costs include per-token API pricing and Tavily web search queries ($0.08/query); other tools were cost-free. All figures are averages across 41 customer profiles.</figcaption></figure></div><p>A couple things stood out. First, cost and performance were uncorrelated across models (Section 3.2 of the paper). The best model, Gemini 2.5 Pro, was also the second cheapest. Open-source models with lower per-token pricing lost their cost advantage through higher token consumption and more search queries. Second, giving models access to specialized tools (ORCID, Europe PMC, a sanctions list API) helped on most tasks but actually hurt on background work search, because models with API access performed fewer web searches and missed patents and news articles not indexed in academic databases (Section 3.1). Third, error rates varied geographically: Chinese customers had notably higher missed-flag rates on email domain verification, largely because researchers there more often use personal rather than institutional email addresses (Section 3.3.1).</p><p>The authors are careful to note that the final ship-or-reject decision should stay with humans. AI handles the information gathering but a person decides what to do with it. This feels like the right framing, and as Tessa noted in her comment, the emergence of tools like <a href=\"https://github.com/alejoacelas/api-cliver\">Cliver</a> (the screening API released alongside this paper) means providers increasingly don\u2019t have to build this capability from scratch.
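To make the shape of that information-gathering capability concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the data structures, helper names, and checks are mine, and this is not the Cliver API. A production system would call real bibliographic and sanctions-list services, and the report would go to a human screener.

```python
# Hypothetical sketch of the information-gathering phase of legitimacy
# screening (the flavor of Tasks 1-5 in Acelas et al.), not the Cliver API.
# Real systems would query ORCID, Europe PMC, a sanctions-list service, and
# general web search; here those are stand-in arguments.
from dataclasses import dataclass, field

@dataclass
class Customer:
    name: str
    email: str
    claimed_institution: str
    claimed_domain: str   # e.g. "virginia.edu"

@dataclass
class Report:
    flags: list[str] = field(default_factory=list)
    notes: list[str] = field(default_factory=list)

def gather_information(c: Customer,
                       sanctioned_names: set[str],
                       publications_by_name: dict[str, int]) -> Report:
    """Collect evidence for a human reviewer; never auto-ship or auto-reject."""
    report = Report()

    # Email domain verification. A mismatch is a flag, not a verdict: the
    # paper notes researchers in some regions routinely use personal email.
    domain = c.email.rsplit("@", 1)[-1].lower()
    if domain != c.claimed_domain.lower():
        report.flags.append(
            f"email domain {domain!r} does not match claimed domain "
            f"{c.claimed_domain!r} ({c.claimed_institution})")

    # Sanctions screening (stand-in for a sanctions-list API).
    if c.name.lower() in {n.lower() for n in sanctioned_names}:
        report.flags.append("name matches a sanctions-list entry")

    # Background work search (stand-in for bibliographic APIs plus web
    # search, which the paper found catch things the APIs miss).
    n_pubs = publications_by_name.get(c.name, 0)
    if n_pubs == 0:
        report.flags.append("no relevant publications or patents found")
    else:
        report.notes.append(f"{n_pubs} relevant publications found")

    return report

# The ship-or-reject decision stays with a person reading the report:
report = gather_information(
    Customer("A. Researcher", "a.researcher@gmail.com",
             "University of Virginia", "virginia.edu"),
    sanctioned_names=set(),
    publications_by_name={"A. Researcher": 3},
)
print(report.flags)   # the email-domain mismatch shows up here
```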
That lowers the bar for adopting customer screening, which in turn makes it more reasonable to expect higher standards across the industry.</p>","doi":"https://doi.org/10.59350/6xzxd-5kb71","funding_references":null,"guid":"192939021","id":"4061d058-d77f-497e-afbc-99776b3bd489","image":"https://substackcdn.com/image/fetch/$s_!SoOb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2dbc45f0-2131-481b-82ef-afa37306cd6c_892x762.png","indexed":true,"indexed_at":1775130458,"language":"en","parent_doi":null,"published_at":1775129325,"reference":[],"registered_at":0,"relationships":[],"rid":"d3psg-jz607","status":"active","summary":"A new preprint shows AI can handle legitimacy verification at a fraction of the cost.","tags":["Biosecurity","AI"],"title":"AI-Assisted Customer Screening for DNA Synthesis Orders","updated_at":1775129325,"url":"https://blog.stephenturner.us/p/ai-customer-screening-dna-synthesis","version":"v1"}},{"document":{"abstract":null,"archive_url":null,"authors":[{"name":"Stephen Turner"}],"blog":{"archive_collection":null,"archive_host":null,"archive_prefix":null,"archive_timestamps":null,"authors":[{"name":"Stephen Turner"}],"canonical_url":null,"category":"biologicalSciences","community_id":"382941a7-2ffa-41df-8bbb-5f772188517f","created_at":1734172613,"current_feed_url":null,"description":"A practicing data scientist's take on AI, genomics, biosecurity, and the ways AI is reshaping how science gets done. Weekly updates from the field. Occasional notes on programming.","doi_as_guid":false,"favicon":null,"feed_format":"application/rss+xml","feed_url":"https://blog.stephenturner.us/feed","filter":null,"funding":null,"generator":"Substack","generator_raw":"Substack","home_page_url":"https://blog.stephenturner.us/","id":"bffe125c-3dfa-4f25-998f-e62878677c7c","indexed":true,"issn":null,"language":"en","license":"https://creativecommons.org/licenses/by/4.0/legalcode","mastodon":"https://bsky.app/profile/stephenturner.us","prefix":"10.59350","registered_at":0,"relative_url":null,"ror":null,"secure":true,"slug":"stephenturner","status":"active","subfield":"1311","subfield_validated":true,"title":"Paired Ends","updated_at":1775289119.319881,"use_api":null,"use_mastodon":false,"user_id":"ae63ef98-7475-4cc1-b3eb-244d5e096f0f"},"blog_name":"Paired Ends","blog_slug":"stephenturner","content_html":"<p><em>Hello, friends. This recap comes a day early because I\u2019ll be leaving tomorrow for a long overdue holiday in France. No updates next week. Au revoir mes amis. </em>\ud83c\uddeb\ud83c\uddf7\ud83e\uddc0\ud83c\udf77</p><div><hr></div><p>Chris Lu, et al., in <em>Nature</em>: <strong><a href=\"https://www.nature.com/articles/s41586-026-10265-5\">Towards end-to-end automation of AI research</a></strong>.
Sakana AI\u2019s \u201cAI Scientist\u201d pipeline handles the full ML research loop: ideation, literature search, experiment design and execution, paper writing, and automated peer review. One of its manuscripts scored above the acceptance threshold at an ICLR 2025 workshop (which had a 70% acceptance rate, to be fair). Paper quality as judged by their automated reviewer tracks closely with foundation model capability, and with compute budget per paper, which tells you where this is headed even if the current output isn\u2019t threatening anyone\u2019s tenure case. For a quicker summary, read <strong><a href=\"https://sakana.ai/ai-scientist-nature/\">Sakana\u2019s blog post</a></strong>.</p><div class=\"captioned-image-container\"><figure><figcaption class=\"image-caption\">Fig. 2 from <a href=\"https://www.nature.com/articles/s41586-026-10265-5\">Lu 2026</a>: Selected sections from a paper generated by The AI Scientist that was accepted via peer review at a top-tier machine learning conference workshop.</figcaption></figure></div><p><em>Counterpoint</em>: Steven Salzberg writes <strong><a href=\"https://stevensalzberg.substack.com/p/ai-is-starting-to-look-like-pseudoscience\">AI badly needs a dose of skepticism</a></strong>. Salzberg goes after DNA foundation models, arguing that their central claim (predict the effects of any mutation from sequence alone) is biologically implausible and largely unfalsifiable, two properties he knows well from years of writing about pseudoscience nonsense (homeopathy, acupuncture). Teams build ever-larger models first, then go looking for problems, which is backwards.
The core critique of unfalsifiable prediction claims and <em><a href=\"https://www.nature.com/articles/s41586-026-10265-5\">Nature</a></em><a href=\"https://www.nature.com/articles/s41586-026-10265-5\">\u2019s eagerness to publish them</a> is hard to dismiss. See above.</p><p>Arjun Raj: <strong><a href=\"https://arjunrajlab.substack.com/p/transitioning-to-being-a-pi-in-the\">Transitioning to Being a PI in the Age of AI</a></strong>. A short and honest post about the asymmetry in how faculty and trainees experience the current AI moment in computational biology. Faculty are exhilarated because they\u2019ve spent years developing the skill of evaluating analyses without doing them line by line; trainees are more ambivalent because they\u2019re being asked to make that same transition in months rather than years or decades.</p><p>My SDS colleague Heman Shakeri released full materials for his <strong><a href=\"https://shakeri-lab.github.io/dl-course-site/\">Deep Learning Course</a></strong> here at UVA: a complete, openly licensed (CC BY 4.0) deep learning course from UVA\u2019s School of Data Science, built for the online MSDS program (DS 6050) and public since Fall 2025. The 12-module sequence starts with NumPy-first implementations of MLPs and backpropagation, moves through CNNs, RNNs, encoder-decoder architectures, and the full attention/transformer stack, and finishes with ViTs, LoRA/QLoRA, and generative models including diffusion. Each module has lecture videos, notes, slides, and Colab assignments with unit tests. The <a href=\"https://shakeri-lab.github.io/dl-course-site/syllabus.pdf\">syllabus</a> lays out the pedagogical logic: three phases moving from from-scratch understanding to architectural depth to modern practice. Too much deep learning education lives in disconnected repos and YouTube playlists; having everything in one structured, reusable site with a clear arc is more valuable than any single component on its own.</p><p>Carl Zimmer, NYT: <strong><a href=\"https://www.nytimes.com/2026/03/26/science/biotechnology-pharmaceuticals-eggs.html?unlocked_article_code=1.WVA.g9xY.xi8SRx_pwAC9&amp;smid=url-share\">How to Turn a Chicken Egg Into a Drug Factory</a></strong>. <a href=\"https://www.neionbio.com/\">Neion Bio</a>, a startup that emerged from stealth this week, is engineering chickens whose eggs produce pharmaceutical proteins, potentially replacing the Chinese hamster ovary (CHO) cell lines that currently dominate biologic drug manufacturing. The company claims 3,900 hens could meet global demand for Humira at a fraction of the cost of a CHO facility (Merck just broke ground on a $1B Keytruda plant, for comparison). Sven Bocklandt, Neion's chief scientific officer, was a colleague of mine at Colossal, where we worked on the dire wolf program together.
Zimmer's writeup (great as usual) discusses the history of how CHO cells became the default and why advances in primordial germ cell manipulation are finally making avian biomanufacturing viable.</p><p>New NIH Highlighted Topic: <strong><a href=\"https://grants.nih.gov/funding/find-a-fit-for-your-research/highlighted-topics/54\">Advancing \u201cScience of Science\u201d Research to Understand and Strengthen the Biomedical Research Ecosystem</a></strong>. These are not NOFOs, but descriptions of scientific areas that NIH ICOs are interested in funding through existing parent announcements. This one encourages investigator-initiated applications on the \u201cscience of science,\u201d the study of how the biomedical research ecosystem itself works. Topics include workforce retention, research capacity building, rigor and reproducibility, translation bottlenecks, and the economic returns of research investment.</p><p>Yet another new NIH Highlighted Topic: <strong><a href=\"https://grants.nih.gov/funding/find-a-fit-for-your-research/highlighted-topics/19\">BRAIN Initiative: Advancing Human Neuroscience and Precision Molecular Therapies for Transformative Treatments</a></strong>. This one covers the <a href=\"https://braininitiative.nih.gov/\">BRAIN Initiative</a>\u2019s priorities in human neural circuit research, clinical neurotechnology, and precision molecular therapies (optogenetics, chemogenetics).
11 ICOs are listed as participating.</p><p>More NIH news: <strong><a href=\"https://grants.nih.gov/grants/guide/notice-files/NOT-OD-26-064.html\">NOT-OD-26-064: Update of NIH Late Application Submission Policy and End of Continuous Submission</a></strong>. NIH is ending its Continuous Submission policy, which let PIs serving on review panels submit applications outside normal deadlines. Effective for due dates on or after May 25, 2026.</p><p><strong><a href=\"https://content.govdelivery.com/accounts/USNSF/bulletins/410a918\">TIP Leadership Update</a></strong>. NSF's Erwin Gianchandani announces the retirement of Gracie Narcho, who served as deputy assistant director and directorate head for the Technology, Innovation and Partnerships directorate since its founding. Gianchandani credits Narcho with co-authoring the vision that became TIP before it had authorizing legislation, and with launching programs like the NSF Regional Innovation Engines and the I-Corps Hubs during a career spanning three decades and multiple NSF directorates.</p><p>Austin Dickey: <strong><a href=\"https://positron.posit.co/blog/posts/2026-03-31-python-type-checkers/\">How we chose Positron's Python type checker</a></strong>. Posit evaluated 4 open-source Python language servers (Pyrefly, ty, Basedpyright, Zuban) across features, correctness, performance, and ecosystem health, then chose Meta's Pyrefly as Positron's default. The most interesting section is the comparison of type-checking philosophies: ty follows a \"gradual guarantee\" where removing a type annotation never introduces an error, while Pyrefly infers types aggressively even in untyped code.
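A toy example (mine, not from the Positron post) makes the difference between the two philosophies concrete:

```python
# An unannotated function. Which checker complains, and when?

def double(x):
    return x * 2

double("abc")   # fine at runtime: returns "abcabc" (str supports * int)
# double(None)  # would raise TypeError at runtime

# Under ty's "gradual guarantee", an unannotated parameter is dynamic, so
# neither call above is reported: removing annotations can only remove
# diagnostics, never introduce new ones.
#
# An aggressive inferencer like Pyrefly may instead infer from the body
# that `x` must support multiplication by an int, and flag `double(None)`
# even though nothing is annotated.

def double_annotated(x: int) -> int:
    return x * 2

# double_annotated("abc")  # both philosophies flag this: str is not int
```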
Good overview of a space that's moving fast.</p><p>Mario Zechner: <strong><a href=\"https://mariozechner.at/posts/2026-03-25-thoughts-on-slowing-the-fuck-down/\">Thoughts on slowing the f*ck down</a></strong>. A year into production use of coding agents, Zechner argues that the compounding of small errors at machine speed, combined with agents\u2019 inability to learn from mistakes and their low-recall search over large codebases, is producing unmaintainable messes far faster than human teams ever could. The prescription: treat agents as task-level tools with humans as the quality gate, write your architecture by hand, and set deliberate limits on how much generated code you accept per day.</p><p>Theo Roe: <strong><a href=\"https://www.jumpingrivers.com/blog/why-learning-r-is-a-good-career-move-in-2026/\">Why Learning R is a Good Career Move in 2026</a></strong>. A short, beginner-oriented pitch from Jumping Rivers (an R training company, so calibrate accordingly) making the case for R as a first language for data work. Nothing new for experienced practitioners, but a reasonable overview of where R still has a strong foothold: healthcare, pharma, government, academic research, and anywhere visualization and reproducible reporting are central.
The honest caveat at the end is useful: if you want software engineering or large-scale production systems, you probably need Python.</p><p>Matt Lubin at <a href=\"https://open.substack.com/pub/mattsbiodefense\">Bio-Security Stack</a>: <strong><a href=\"https://mattsbiodefense.substack.com/p/five-things-march-29-2026\">Five Things: March 29, 2026</a></strong>: Anthropic temporary win, scheming, biodesign by LLM, White House advisors, Anthropic security.</p><p>Ryan Layer: <strong><a href=\"https://ryanlayerlab.github.io/layerlab/2026/03/23/What-Do-I-Teach-Now.html\">What do I teach now?</a></strong> Ryan has taught Software Engineering for Scientists at CU Boulder since 2019, and coding agents have forced him to rethink the whole course. In science, code is the method, so vibe coding is a reproducibility problem in addition to being a quality problem. He\u2019s now rebuilding the class around open questions like who audits AI-generated analyses in ten years if no one learns to build from scratch.</p><blockquote><p>The thought of my students building software by prompting and accepting the output without reading the code keeps me up at night. [\u2026] For science, where the code is the method, vibe coding is not an option.</p></blockquote><p>Claus Wilke at <a href=\"https://open.substack.com/pub/clauswilke\">Genes, Minds, Machines</a>: <strong><a href=\"https://blog.genesmindsmachines.com/p/creating-reproducible-data-analysis\">Creating reproducible data analysis pipelines</a></strong>. A case against the \u201crun everything from raw data\u201d ideal of reproducibility. Claus argues that intermediate CSV files saved right before plotting are more durable than any end-to-end pipeline: pipelines break, Docker images rot,<a class=\"footnote-anchor\" data-component-name=\"FootnoteAnchorToDOM\" id=\"footnote-anchor-1\" href=\"#footnote-1\" target=\"_self\">1</a> and students (and PIs!)
lose afternoons rerunning everything to swap a violin plot for a boxplot.</p><p><strong><a href=\"https://ropensci.org/blog/2026/03/30/news-mars-2026/\">rOpenSci News Digest, March 2026</a></strong>: dev guide, champions program, software review and usage of AI tools.</p><p>Joe Rickert: <strong><a href=\"https://rworks.dev/posts/Feb-2026-Top40/\">February 2026 Top 40 New CRAN Packages</a></strong>: AI, machine learning, biology, medical applications, physics, Buddhism, statistics, climate science, computational methods, data, surveys, ecology, time series, epidemiology, utilities, genomics, and visualization.</p><p><strong><a href=\"https://rweekly.org/2026-W14.html\">R Weekly 2026-W14</a>:</strong> ggauto, alt text, scientific coffee.</p><p>Max Kuhn: <strong><a href=\"https://tidyverse.org/blog/2026/03/tabpfn-0-1-0/\">tabpfn 0.1.0</a></strong>. An R interface (via reticulate) to TabPFN, a pre-trained deep learning model for tabular prediction from PriorLabs (I wrote <a href=\"https://blog.stephenturner.us/i/156727044/accurate-predictions-on-small-data-with-a-tabular-foundation-model\">this short summary of TabPFN</a> last year). The model was trained entirely on synthetic data generated from complex graph models simulating correlation structures, skewness, missing data, interactions, and more. No fitting happens on your data; your training set primes an attention mechanism via in-context learning.</p><p>Elizabeth Ginexi: <strong><a href=\"https://elizabethginexi.substack.com/p/inside-the-nih-forecast-graveyard\">Inside the NIH Forecast Graveyard</a></strong>. An accounting of NIH funding opportunities that were announced on grants.gov and then never published. Of 336 open forecasts, 205 have passed their promised posting dates with no explanation. The first wave of cancellations in April 2025 was keyword-driven (DEI, HIV, health disparities), but the later waves and the larger mass of silently expiring forecasts hit basic science, clinical infrastructure, and congressionally mandated programs like the BRAIN Initiative and Gabriella Miller Kids First. Ginexi, a former NIH insider, makes the dataset available for anyone to check.</p><p>Niko McCarty: <strong><a href=\"https://nikomc.com/2026/04/01/optogenetics-serendipity/\">Many Great Inventions Weren\u2019t Made by \u201cSerendipity\u201d</a></strong>. Niko uses <a href=\"https://en.wikipedia.org/wiki/Optogenetics\">optogenetics</a> as the central case for a broader argument: the breakthroughs we narrate as lucky accidents were usually preceded by years of deliberate preparation and systematic enumeration of possible solutions.</p>
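Circling back to Claus's argument for a moment: the pattern is simple enough to sketch. His post is R-flavored; this Python version is my own minimal illustration, with made-up file and column names.

```python
# The pattern Claus describes: the expensive analysis writes a tidy,
# plot-ready CSV right before plotting; the plotting script only reads
# that CSV, so swapping a violin plot for a boxplot reruns nothing.
import pandas as pd
import matplotlib.pyplot as plt

# --- expensive_analysis.py: run rarely, save the plot-ready table ---
def run_expensive_analysis() -> pd.DataFrame:
    # stand-in for hours of alignment / simulation / model fitting
    return pd.DataFrame({
        "group": ["a"] * 50 + ["b"] * 50,
        "value": list(range(50)) + list(range(25, 75)),
    })

run_expensive_analysis().to_csv("figure1_data.csv", index=False)

# --- make_figure1.py: rerun as often as you like, seconds not afternoons ---
df = pd.read_csv("figure1_data.csv")
df.boxplot(column="value", by="group")  # swap the plot type without rerunning anything
plt.savefig("figure1.png")
```

The durable artifact is the CSV itself: even if the pipeline that produced it rots, the figure can still be regenerated, inspected, or redrawn.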
<p><strong>New papers &amp; preprints:</strong></p><ul><li><p><a href=\"https://www.nature.com/articles/s41586-026-10265-5\">Towards end-to-end automation of AI research</a></p></li><li><p><a href=\"https://academic.oup.com/bib/article/27/2/bbag131/8553189\">Toward next-generation machine learning and deep learning for spatial omics</a></p></li><li><p><a href=\"https://rdcu.be/faCkJ\">High-resolution metagenome assembly for modern long reads with myloasm</a></p></li><li><p><a href=\"https://www.nejm.org/doi/full/10.1056/NEJMp2516973\">The Age Illusion \u2014 Limitations of Chronologic Age in Medicine</a></p></li><li><p><a href=\"https://rdcu.be/faJsm\">Accelerating coral assisted evolution to keep pace with climate change</a></p></li><li><p><a href=\"https://rdcu.be/faNfU\">SNP calling, haplotype phasing and allele-specific analysis with long RNA-seq reads</a></p></li><li><p><a href=\"https://academic.oup.com/cid/advance-article/doi/10.1093/cid/ciag034/8540088\">State AIDS Drug Assistance Programs\u2019 Contribution to the US Viral Suppression, 2015\u20132022</a></p></li><li><p><a href=\"https://www.nature.com/articles/s41592-026-03047-4\">AlphaFold as a prior: experimental structure determination conditioned on a pretrained neural network</a></p></li></ul><div class=\"footnote\" data-component-name=\"FootnoteToDOM\"><a id=\"footnote-1\" href=\"#footnote-anchor-1\" class=\"footnote-number\" contenteditable=\"false\" target=\"_self\">1</a><div class=\"footnote-content\"><p>Paper on this topic coming soon. Stay tuned.</p></div></div>","doi":"https://doi.org/10.59350/fd6cm-etd59","funding_references":null,"guid":"192202894","id":"883fde4c-ea35-470d-9f1b-22b8ac1b4c84","image":"https://substackcdn.com/image/fetch/$s_!4UvV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ddd7197-4ed9-4e70-bb10-d266405aff92_2168x1320.png","indexed":true,"indexed_at":1775119052,"language":"en","parent_doi":null,"published_at":1775118358,"reference":[],"registered_at":0,"relationships":[],"rid":"gczhc-r8q91","status":"active","summary":"AI automating AI research, AI + being PI, DL course, Neion Bio, NIH highlighted topics, TIP, Python type checking in Positron, R updates, biosecurity, NIH forecast graveyard, serendipity, new papers.","tags":["Papers","R ","AI","Python"],"title":"Weekly Recap (April 2, 2026)","updated_at":1775118358,"url":"https://blog.stephenturner.us/p/weekly-recap-april-2-2026","version":"v1"}},{"document":{"abstract":"Since our founding in 2009, DataCite\u2019s work has been guided by a singular and shared commitment to building and sustaining open infrastructure that everyone can participate in and benefit from.
We are a community where all research organizations can belong and where all research outputs, resources, and activities can be shared, discovered, and connected.","archive_url":null,"authors":[{"contributor_roles":[],"name":"DataCite Staff"}],"blog":{"archive_collection":23763,"archive_host":null,"archive_prefix":null,"archive_timestamps":null,"authors":[{"name":"DataCite Staff"}],"canonical_url":null,"category":"computerAndInformationSciences","community_id":"916f4925-a9f6-4b4d-b823-c769ef054f15","created_at":1733579959,"current_feed_url":null,"description":"Connecting Research, Advancing Knowledge","doi_as_guid":false,"favicon":"https://rogue-scholar.org/api/communities/916f4925-a9f6-4b4d-b823-c769ef054f15/logo","feed_format":"application/atom+xml","feed_url":"https://datacite.org/blog/feed/atom/","filter":null,"funding":null,"generator":"WordPress","generator_raw":"WordPress","home_page_url":"https://datacite.org/","id":"127eb888-8cbe-4afc-a6f8-b58adffec39f","indexed":true,"issn":null,"language":"en","license":"https://creativecommons.org/licenses/by/4.0/legalcode","mastodon":null,"prefix":null,"registered_at":0,"relative_url":null,"ror":null,"secure":true,"slug":"datacite","status":"active","subfield":"1710","subfield_validated":null,"title":"DataCite Blog - DataCite","updated_at":1775288946.721507,"use_api":true,"use_mastodon":false,"user_id":"dead81b3-8a8b-45c9-85fe-f01bb3948c77"},"blog_name":"DataCite Blog - DataCite","blog_slug":"datacite","content_html":"\n<p>Since our founding in 2009, DataCite\u2019s work has been guided by a singular and shared commitment to building and sustaining open infrastructure that everyone can participate in and benefit from. We are a community where all research organizations can belong and where all research outputs, resources, and activities can be shared, discovered, and connected.&nbsp;</p>\n\n\n\n<p>We have followed that founding principle while developing programs and services that address ongoing and emerging use cases, expand global access, and support long-term sustainability. And we\u2019ve taken steps over the years to adapt our membership model alongside these developments, in consultation with our members, Executive Board, and broader community.&nbsp;</p>\n\n\n\n<p>We are now taking the next step forward in this journey.</p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"></div>\n\n\n\n<h2 class=\"wp-block-heading\">What&#8217;s changing</h2>\n\n\n\n<p>Starting this month, we\u2019re introducing key updates to our <a href=\"https://datacite.org/fees\" target=\"_blank\" rel=\"noreferrer noopener\">membership fee structure</a> to re-align with the guiding values behind our origins and our vision for the future.</p>\n\n\n\n<p>We\u2019re making a deliberate shift from a transactional model based on DOI registration quantities to a collective funding model focused on supporting shared open infrastructure. 
This means that DataCite&#8217;s standard fee structure will no longer include per-DOI fees or fees based on DOI quantities.&nbsp;</p>\n\n\n\n<p>As part of this change, we\u2019re also simplifying how fees are applied and adjusting costs based on <a href=\"https://fees.datacite.org/countries\" target=\"_blank\" rel=\"noreferrer noopener\">country-level economic indicators</a> to achieve a more balanced distribution across member organizations.</p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"></div>\n\n\n\n<h2 class=\"wp-block-heading\">Why this matters</h2>\n\n\n\n<p>DataCite has always been focused on community-driven infrastructure, and we&#8217;ve never been just about DOIs. Moving away from a DOI-centric pricing structure removes disincentives to making all outputs and activities broadly accessible. It allows us to shift the focus from the cost of a single DOI to the potential that can be achieved through rich metadata, lasting connections, and long-term stewardship.&nbsp;</p>\n\n\n\n<p>A simpler and more equitable fee model makes it easier for organizations to contribute to and benefit from shared open infrastructure. This isn\u2019t just about inclusion. It\u2019s also about investing in the quality and completeness of the global research record. Our infrastructure and our metadata stores become more valuable when they are used by and available for everyone.&nbsp;</p>\n\n\n\n<p>We have always supported multiple pathways to participation and multiple ways to use our infrastructure. These updates to the membership fee structure continue to broaden pathways of participation and advance DataCite\u2019s vision of shared ownership, where all organizations can engage in a way that works for them, whether that means accessing services directly, participating in a consortium to share costs and engage in communities of practice, or investing funds in DataCite\u2019s mission.&nbsp;</p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"></div>\n\n\n\n<h2 class=\"wp-block-heading\">Why now</h2>\n\n\n\n<p>DataCite metadata and metadata retrieval tools have always been freely and openly available to anyone. As a membership association, we sustain our operations through membership fees, which cover additional member-only services and participation in DataCite governance.\u00a0These fees are determined by the membership and Executive Board according to our <a href=\"https://datacite.org/wp-content/uploads/2023/06/Statutes_26April2022.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">statutes</a>, and are designed to support cost recovery and long-term sustainability of DataCite infrastructure while ensuring equitable global access to DataCite membership and services. The fee structure has evolved over the years, and was last updated in 2020.</p>\n\n\n\n<p>As the DataCite community has continued to grow, so has the scale and diversity of how and where DataCite infrastructure is used. A fee model tied closely to DOI volume no longer reflects the full meaning of participation, nor does it support the broadest possible engagement. At the same time, there is increasing recognition across the research ecosystem that shared infrastructure requires shared investment.
Shifting from a transactional model to a collective one positions DataCite to more tightly align sustainability with mission.</p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"></div>\n\n\n\n<h2 class=\"wp-block-heading\">What remains constant</h2>\n\n\n\n<p>While our fee structure is evolving, DataCite&#8217;s services, governance model, and commitment to open infrastructure remain constant. Our membership program, statutes, and fees will continue to be shaped by our General Assembly and Executive Board, while we continue to support existing members in achieving their goals and engage with new organizations joining the community through the pathway that best meets their needs.&nbsp;</p>\n\n\n\n<p>If you are not yet part of the DataCite member community and would like to learn more about <a href=\"https://datacite.org/become-a-member\" target=\"_blank\" rel=\"noreferrer noopener\">membership pathways and benefits</a>, we invite you to <a href=\"mailto:support@datacite.org\" target=\"_blank\" rel=\"noreferrer noopener\">contact our community team</a> and join our <a href=\"https://datacite.org/event/datacite-membership-essentials/\" target=\"_blank\" rel=\"noreferrer noopener\">open community webinar</a> next month. If you\u2019re ready to get started right now, you can submit a <a href=\"https://datacite.org/membership-inquiry\" target=\"_blank\" rel=\"noreferrer noopener\">membership inquiry</a>.\u00a0</p>\n\n\n\n<p>We look forward to welcoming more organizations into the DataCite community, and to continuing to build open research infrastructure together.</p>\n","doi":"https://doi.org/10.5438/gc07-ah64","funding_references":null,"guid":"https://datacite.org/?p=14888","id":"fc9f3941-c9ae-4394-b28c-00785f51da5d","image":"https://datacite.org/wp-content/uploads/2026/04/Datacite_Social_Media_Blog_post_banner_DataCite_fee_update_2.png","indexed":true,"indexed_at":1775197725,"language":"en","parent_doi":null,"published_at":1775111528,"reference":[],"registered_at":0,"relationships":[],"rid":"aftd5-dfw64","status":"active","summary":"Since our founding in 2009, DataCite\u2019s work has been guided by a singular and shared commitment to building and sustaining open infrastructure that everyone can participate in and benefit from. We are a community where all research organizations can belong and where all research outputs, resources, and activities can be shared, discovered, and connected.","tags":["Strategy"],"title":"A New Membership Model for a More Equitable DataCite","updated_at":1775143649,"url":"https://datacite.org/blog/a-new-membership-model-for-a-more-equitable-datacite/","version":"v1"}}],"items":[
{"abstract":"Unsere monatliche Rubrik zu aktuellen Veranstaltungen rund um Open Research.","archive_url":null,"authors":[{"contributor_roles":[],"family":"Fischer","given":"Georg","url":"https://orcid.org/0000-0001-5620-5759"}],"blog":{"archive_collection":22141,"archive_host":null,"archive_prefix":"https://wayback.archive-it.org/22141/20231105110201/","archive_timestamps":[20231105110201,20240505180741,20241105110207,20250505110216],"authors":null,"canonical_url":null,"category":"otherSocialSciences","community_id":"52aefd81-f405-4349-b080-754395a5d8b2","created_at":1694476800,"current_feed_url":null,"description":null,"doi_as_guid":false,"favicon":"https://rogue-scholar.org/api/communities/52aefd81-f405-4349-b080-754395a5d8b2/logo","feed_format":"application/atom+xml","feed_url":"https://blogs.fu-berlin.de/open-research-berlin/feed/atom/","filter":null,"funding":null,"generator":"WordPress","generator_raw":"WordPress 6.0","home_page_url":"https://blogs.fu-berlin.de/open-research-berlin/","id":"575d6b2d-c555-4fc7-99fb-055a400f9163","indexed":false,"issn":null,"language":"de","license":"https://creativecommons.org/licenses/by/4.0/legalcode","mastodon":"https://berlin.social/@openaccess","prefix":"10.59350","registered_at":1729602098,"relative_url":null,"ror":null,"secure":true,"slug":"oaberlin","status":"active","subfield":"1802","subfield_validated":null,"title":"Open Research Office Berlin","updated_at":1775289050.194723,"use_api":true,"use_mastodon":true,"user_id":"383c62ed-0cf6-4dc7-a56c-5b0104f7f10a"},"blog_name":"Open Research Office Berlin","blog_slug":"oaberlin","content_html":"<p>Unsere monatliche Rubrik zu aktuellen Veranstaltungen rund um Open Research.</p>\n<pre>Anmerkung zu dieser Rubrik: Das Open Research Office Berlin erstellt monatlich eine \u00dcbersicht \u00fcber Termine und Veranstaltungen zu Open Access und Open Research in Berlin bzw. an Berliner Einrichtungen. Der Fokus liegt dabei auf unseren Partnereinrichtungen und auf Veranstaltungen, die sich an die \u00d6ffentlichkeit richten bzw. die offen sind f\u00fcr Angeh\u00f6rige der Wissenschafts- und Kulturerbeeinrichtungen in Berlin. Wir erg\u00e4nzen diese Liste gerne (Info bitte via <a href=\"mailto:team@open-research-berlin.de\">Mail</a> ans OROB).</pre>\n<h2>31. M\u00e4rz, Webarchivierung f\u00fcr viele: Expertise und Infrastruktur gemeinschaftlich aufbauen, Berlin</h2>\n<p><em>Jeden Tag geht ein Teil unseres digitalen Kulturerbes unwiederbringlich verloren \u2013 Netzliteratur, Websites, Social-Media-Beitr\u00e4ge und viele weitere Online-Inhalte verschwinden, ohne dass wir es bemerken.
Dabei gibt es l\u00e4ngst Wege, dieses Erbe zu bewahren: Gemeinsam mit den Expert:innen Claus-Michael Schlesinger und Mona Ulrich hat die Zentral- und Landesbibliothek Berlin (ZLB) in den letzten zwei Jahren Workshops zu den Tools von Webrecorder veranstaltet, mit denen man Webseiten archivieren kann. Um diese Tools f\u00fcr umf\u00e4ngliche Archivierungsvorhaben zu nutzen, braucht es Ressourcen \u2013 zum Beispiel IT-Ressourcen, die nur sehr wenigen Institutionen zur Verf\u00fcgung stehen. Workshop-Teilnehmer:innen aus kleineren Institutionen und Projekten fragten sich daher immer wieder, wie sie sie langfristig nutzen k\u00f6nnen.</em></p>\n<ul>\n<li><strong>Termin: </strong>31.03.2026, 16:00 bis 18:00 Uhr, Technologiestiftung Berlin, 4. Etage, Grunewaldstr. 61-62, 10825 Berlin</li>\n<li><strong>Organisiert von</strong>: kulturBdigital</li>\n<li>[<a href=\"https://www.kultur-b-digital.de/webarchivierung-fuer-viele-expertise-und-infrastruktur-gemeinschaftlich-aufbauen/\">Information und Anmeldung</a>]</li>\n</ul>\n<h2>13. April, Machine-Learning-Montag I: What the Hype? Eine Einf\u00fchrung in die Grundlagen des maschinellen Lernens f\u00fcr Kulturerbeinstitutionen, online</h2>\n<p><em>Maschinelles Lernen (ML) oder auch \u201eK\u00fcnstliche Intelligenz\u201c (KI) ist weiterhin das gro\u00dfe Thema in fast allen Bereichen des menschlichen Arbeitens. Aber was offerieren diese Werkzeuge abseits des gro\u00dfen Hypes von \u201eschneller, gr\u00f6\u00dfer, besser, einfacher und sch\u00f6ner\u201c und dem damit prognostizierten Durchdringen aller Lebensbereiche?\u00a0Diese digiS-Einf\u00fchrung hat zum Ziel, Nicht-Expert:innen im maschinellen Lernen das n\u00f6tige Hintergrundwissen zu vermitteln, um sich in diesem Diskurs zurechtzufinden und Hype von sinnvoller Anwendung unterscheiden zu k\u00f6nnen.</em></p>\n<ul>\n<li><strong>Termin: </strong>13.04.2026, 10:00 bis 12:30 Uhr</li>\n<li><strong>Organisiert von</strong>: digiS; Referent*innen: Xenia Kitaeva und Marco Klindt (digiS)</li>\n<li>[<a href=\"https://www.digis-berlin.de/machine-learning-montag-am-13-april-what-the-hype/\">Information und Anmeldung</a>]</li>\n</ul>\n<h2>14. April, FDM@BUA: Offboarding Template als Grundlage f\u00fcr Daten- und Wissens\u00fcbergabe in Projekten, online</h2>\n<p><em>Dr. Stefanie Seltmann, Research Data Steward am Berlin Institute of Health, stellt vor, wie sich der Transfer von Forschungsdaten und projektbezogenem Wissen beim Ausscheiden von Projektmitgliedern systematisch gestalten l\u00e4sst.\u00a0Im Mittelpunkt steht ein entwickeltes Offboarding-Template, das als strukturierte Grundlage f\u00fcr Daten- und Wissens\u00fcbergabe dient. Ziel ist es, die Kontinuit\u00e4t in Forschungsprojekten zu sichern, die Qualit\u00e4t der Dokumentation zu verbessern und das Risiko von Datenverlusten zu reduzieren. Das Template ist so konzipiert, dass es flexibel an unterschiedliche Forschungskontexte angepasst und in bestehende institutionelle FDM-Prozesse integriert werden kann.</em></p>\n<ul>\n<li><strong>Termin: </strong>14.04.2026, 10:00 bis 11:30 Uhr, online via Webex</li>\n<li><strong>Organisiert von</strong>: Berlin University Alliance</li>\n<li>[<a href=\"https://www.berlin-university-alliance.de/commitments/sharing-resources/shared-resources-center/CARDS-FDM/cards_events/2026-04-14_offboarding.html\">Information und Anmeldung</a>]</li>\n</ul>\n<h2>15. 
April, Datenmanagementpl\u00e4ne und der RDMO-Service von NFDI4Culture, online</h2>\n<p><em>Sie sind digital k\u00fcnstlerisch oder gestalterisch t\u00e4tig und wollen die bei Ihrer Arbeit anfallenden Daten so managen, dass andere damit arbeiten k\u00f6nnen? Sie sind eine Hochschuleinrichtung, die Daten aus studentischen Arbeiten oder wissenschaftlichen Projekten im Bereich der K\u00fcnste entgegennimmt?\u00a0Der Research Data Management Organiser (RDMO) ist ein flexibles und kostenfreies Werkzeug, das Sie beim Management Ihrer Daten und bei der Planung von digitalen Projekten aller Art unterst\u00fctzen kann.</em></p>\n<ul>\n<li><strong>Termin: </strong>15.04.2026, 15:00 bis 17:00 Uhr, online via Webex</li>\n<li><strong>Organisiert von</strong>: Fokusgruppe OA-K\u00fcnste, open-access.network</li>\n<li>[<a href=\"https://open-access.network/vernetzen/digitale-fokusgruppen/fokusgruppe-oa-kuenste#c28672\">Information</a>]</li>\n</ul>\n<h2>16.-30. April, Open Science Hardware Workshops, TU Berlin</h2>\n<p><em>Open Science Hardware (OSH) enables researchers to design, prototype, document, and share custom research tools in a transparent and reproducible way. It is often facilitated by the use of digital manufacturing, which combines computer aided design and computer aided manufacturing software with machines like 3D printers, laser cutters and CNC milling machines.\u00a0In April, several introductory workshops will invite life science researchers and technical staff, including the Neuroscience community, to explore how digital fabrication and structured documentation can strengthen research practice \u2014 from cost-efficient prototyping and publishable hardware to the strengthening of research communities. No prior experience required.</em></p>\n<ul>\n<li><strong>Termin: </strong>16. bis 30.04.2026, Universit\u00e4tsbibliothek der TU Berlin bzw. Campus der Humboldt-Universit\u00e4t zu Berlin</li>\n<li><strong>Organisiert von</strong>: Berlin University Alliance</li>\n<li>[<a href=\"https://events.tu-berlin.de/de/events/019d2fd3-e17f-73fa-be53-5f672d77b504?scopeFilter%5Bpublicly_visible%5D=true&amp;scopeFilter%5Bhidden_in_lists%5D=false&amp;scopeFilter%5Bended%5D=false&amp;page%5Bnumber%5D=1&amp;page%5Bsize%5D=50&amp;page%5Btotal%5D=9&amp;sort%5B0%5D=-pinned&amp;sort%5B1%5D=start_at&amp;sort%5B2%5D=title\">Information und Anmeldung</a>]</li>\n</ul>\n<h2>20. April, Workshop Open Access in und f\u00fcr Museen, Europa-Universit\u00e4t Frankfurt/Oder</h2>\n<p><em>Anhand von mehreren Anwendungsf\u00e4llen wollen wir kooperative Ans\u00e4tze f\u00fcr Open Access und Open Culture an der Schnittstelle von Kultureinrichtungen, Hochschulen und Open-Access-Publikationsunterst\u00fctzungsinfrastrukturen explorieren und die Entwicklung eines konzeptionellen Rahmens f\u00fcr m\u00f6gliche L\u00f6sungen vorbereiten.\u00a0Die Veranstaltung richtet sich an die in diesen Bereichen t\u00e4tigen Professionals.</em></p>\n<ul>\n<li><strong>Termin: </strong>20.04.2026, Europa-Universit\u00e4t Frankfurt/Oder</li>\n<li><strong>Organisiert von</strong>: Europa-Universit\u00e4t Viadrina, Stiftung Kleist-Museum Frankfurt (Oder) und Vernetzungs- und Kompetenzstelle Open Access Brandenburg (VuK)</li>\n<li>[<a href=\"https://open-access-brandenburg.de/workshop-open-access-in-und-fuer-museen-euv_2026/\">Information und Anmeldung</a>]</li>\n</ul>\n<h2>20. 
April, Wikidata f\u00fcr die Sammlungserschlie\u00dfung, online</h2>\n<p><em><a href=\"https://www.wikidata.org/wiki/Wikidata:Main_Page\">Wikidata</a> ist ein gro\u00dfer, generischer, offener, frei editierbarer Wissensgraph, der Informationen buchst\u00e4blich \u00fcber Gott (<a href=\"http://www.wikidata.org/entity/Q190\">Q190</a>) und die Welt (<a href=\"http://www.wikidata.org/entity/Q2\">Q2</a>) vorh\u00e4lt \u2013 sowie \u00fcber mehr als 120 Millionen andere Entit\u00e4ten (<a href=\"https://www.wikidata.org/wiki/Wikidata:Statistics\">https://www.wikidata.org/wiki/Wikidata:Statistics</a>). F\u00fcr GLAM-Einrichtungen ist das Potential von Wikidata erheblich: In Wikidata lassen sich Informationen zu Objekten, Personen, Orten, Bauwerken und vielem mehr pflegen, und es k\u00f6nnen bei Bedarf neue Datens\u00e4tze erstellt werden. Wikidata ist somit als flexibler ad-hoc-Normdatengenerator eine optimale Erg\u00e4nzung zur Gemeinsamen Normdatei (GND). [&#8230;]\u00a0\u00dcber all diese Dinge werden wir im digiS-Workshop \u201eWikidata f\u00fcr die Sammlungserschlie\u00dfung\u201c sprechen, um auf diese Weise das Potenzial von Wikidata f\u00fcr GLAM-Institutionen und speziell f\u00fcr die Sammlungsdokumentation genauer in den Blick zu nehmen. Selbstverst\u00e4ndlich wird es Raum f\u00fcr Fragen und Diskussionen geben, eine konkrete Einf\u00fchrung in die praktische Arbeit mit Wikidata und den angesprochenen Tools ist f\u00fcr diese Veranstaltung jedoch nicht vorgesehen.</em></p>\n<ul>\n<li><strong>Termin: </strong>20.04.2026, 10:00 bis 11:30 Uhr, online</li>\n<li><strong>Organisiert von</strong>: digiS; Referent: Alexander Winkler (digiS)</li>\n<li>[<a href=\"https://www.digis-berlin.de/workshop-wikidata-fuer-die-sammlungserschliessung-am-20-04/\">Information und Anmeldung</a>]</li>\n</ul>\n<h2>22.-23. April, FDM@BUA Workshop &#8222;Train-the-Trainer Forschungsdatenmanagement&#8220;, FU Berlin</h2>\n<div class=\"editor-content box-event-doc-abstract\">\n<p><em>Kompetenzen im Umgang mit Forschungsdaten sind eine zentrale Grundvoraussetzung f\u00fcr moderne Wissenschaft: Ohne eine gute Dokumentation und Nachhaltung gibt es keine FAIR (Findable, Accessible, Interoperable, Re-usable) Daten. Um diese Kompetenzen an Forschende in vielen F\u00e4chern und Institutionen der Berlin University Alliance zu vermitteln, braucht es ausgebildete Trainer*innen. Das Projekt\u00a0<a href=\"https://www.berlin-university-alliance.de/commitments/sharing-resources/shared-resources-center/CARDS-FDM/index.html\">Collaboratively Advancing Research Data Support</a><a href=\"https://www.berlin-university-alliance.de/commitments/sharing-resources/shared-resources-center/CARDS-FDM/index.html\">(CARDS)</a>bietet daher im April 2026 einen\u00a0<a href=\"https://rti-studio.com/train-the-trainer-workshop-zum-thema-forschungsdatenmanagement/\">Train-the-Trainer Workshop</a>\u00a0zu Forschungsdatenmanagement mit\u00a0<a href=\"https://rti-studio.com/ueber-mich/\">Dr. 
Katarzyna Biernacka</a>\u00a0an.\u00a0Nach dem zweit\u00e4gigen Workshop werden die Teilnehmenden \u00fcber die notwendigen F\u00e4higkeiten verf\u00fcgen, um eigene Trainings und Beratungen zum Forschungsdatenmanagement in ihrer Einrichtung durchzuf\u00fchren.</em></p>\n</div>\n<ul>\n<li><strong>Termin: </strong>22-23.04.2026, Rostlaube an der Freien Universit\u00e4t Berlin</li>\n<li><strong>Organisiert von</strong>: Berlin University Alliance; Referentin: Katarzyna Biernacka</li>\n<li>[<a href=\"https://www.fu-berlin.de/sites/forschungsdatenmanagement/veranstaltungen/2026/2026-04-22-23-FDMatBUA-Workshop-T-t-T-en-KB.html\">Information und Anmeldung</a>]</li>\n</ul>\n<h2>23. April, Magnifying Open Science: Insights from the BUA Participatory Research Map, online</h2>\n<p><em>Open Engagement with societal stakeholders is one of the four pillars of the UNESCO Recommendation on Open Science. The Berlin University Alliance Participatory Research Map maps over 90 projects in which researchers collaborate with societal stakeholders. With the Participatory Research Map, we not only want to increase the visibility of participatory research but also explore how different stakeholders and research modes contribute to open science and open knowledge generation.\u00a0In this event, we will present the results of our analysis and discuss with participants how we can collaboratively contribute to magnifying openness in engaging with societal stakeholders.</em></p>\n<ul>\n<li><strong>Termin: </strong>23.04.2026, online</li>\n<li><strong>Organisiert von</strong>: BUA funded project &#8222;Magnifying Open Science&#8220; (Open Research Office Berlin)</li>\n<li>[<a href=\"https://blogs.fu-berlin.de/open-research-berlin/2025/12/18/save-the-date-for-online-event-series-magnifying-open-science/\">Information und Anmeldung</a>]</li>\n</ul>\n<h2>27. April, Machine Learning Montag II: KI und Recht f\u00fcr Kulturerbe-Einrichtungen &#8211; Vortrag und Q&amp;A, online</h2>\n<p><em> F\u00fcr viele Kulturerbe-Einrichtungen stellt sich die Frage, wie der Einsatz von KI in unterschiedlichen Konstellationen rechtlich zu bewerten ist. Da bei der rechtlichen Bewertung noch viele Unsicherheiten bestehen, soll dieser Workshop den aktuellen Stand der Rechtsprechung sowie auch der Gesetzgebung in Hinblick auf KI erl\u00e4utern. Darauf aufbauend wird die Rechtslage bei verschiedenen Anwendungsbereichen in Kulturerbe-Einrichtungen untersucht.</em></p>\n<ul>\n<li><strong>Termin: </strong>27.04.2026, 10:00 bis 12:30 Uhr, online via Zoom</li>\n<li><strong>Organisiert von</strong>: digiS; Referent: Paul Klimpel (iRights.Law)</li>\n<li>[<a href=\"https://www.digis-berlin.de/machine-learning-montag-ii-am-27-april-ki-und-recht-fuer-kulturerbe-einrichtungen-vortrag-und-qa/\">Information und Anmeldung</a>]</li>\n</ul>\n<h2>29. April, Workshop Research Data Management in a nutshell, online</h2>\n<p><em>Almost every research project generates or collects digital research data. Researchers face the challenge of not only managing and documenting the data, but also preserving it and making it available for reuse. 
This online seminar offers a general introduction to essential aspects of research data management.</em></p>\n<ul>\n<li><strong>Termin: </strong>29.04.2026, 09:30 bis 12:00 Uhr, online</li>\n<li><strong>Organisiert von</strong>: Freie Universit\u00e4t Berlin</li>\n<li>[<a href=\"https://www.fu-berlin.de/sites/forschungsdatenmanagement/veranstaltungen/2026/2026-04-29-Workshop-RDM-in-a-nutshell-en-DM.html\">Information und Anmeldung</a>]</li>\n</ul>\n<h2>30. April, #UPDATE BIB: Open Access zu wissenschaftlichen Publikationen &#8211; Aktuelle Herausforderungen f\u00fcr Bibliotheken, online</h2>\n<p><em>Das Seminar bietet eine \u00fcbersichtliche Einf\u00fchrung in den Stand von Open Access an Bibliotheken und stellt die wichtigsten aktuellen Rahmenbedingungen und Entwicklungen vor. Die Teilnehmer*innen lernen die Grundbegriffe von Open Access kennen und verstehen die technischen, rechtlichen und politischen Rahmenbedingungen freier Verf\u00fcgbarkeit von wissenschaftlichen Publikationen. Die Entwicklungen zu Open Access werden mit Blick auf verschiedene bibliothekarische Handlungsfelder kontextualisiert, wie Erwerbung/Zugang, Informationskompetenz, Forschungsunterst\u00fctzung, technische Infrastrukturen.</em></p>\n<ul>\n<li><strong>Termin: </strong>30.04.2026, 10:00 bis 12:30 Uhr, online</li>\n<li><strong>Organisiert von</strong>: FU Berlin; Referentin: Christina Riesenweber (HU Berlin)</li>\n<li>[<a href=\"https://veranstaltung.weiterbildung.fu-berlin.de/Veranstaltung/cmx64801e98a27ed.html\">Information und Anmeldung</a>]</li>\n</ul>\n<h2>30. April, Open Access meets KI \u2013 L\u00f6sungsans\u00e4tze durch CC-Signals, online</h2>\n<p><em>Um <a href=\"https://creativecommons.org/2025/06/25/introducing-cc-signals-a-new-social-contract-for-the-age-of-ai/\">\u201eoffenes Wissen zu bewahren, [\u2026 und] verantwortungsbewusstes KI-Verhalten [zu] f\u00f6rdern, ohne dabei Innovationen einzuschr\u00e4nken\u201c</a>, hat Creative Commons vor kurzem ein neues Modell vorgestellt: CC Signals. Rechteinhaber*innen sollen so die M\u00f6glichkeit haben, zu signalisieren, unter welchen Voraussetzungen ihre Inhalte von KI-Systemen genutzt werden d\u00fcrfen.\u00a0In unserem n\u00e4chsten ENABLE!-Werkstatt-Gespr\u00e4ch wollen wir uns CC Signals n\u00e4her ansehen und mit unseren Referent*innen diskutieren, wie dieses Modell funktioniert und was wir davon erwarten k\u00f6nnen.</em></p>\n<ul>\n<li><strong>Termin: </strong>30.04.2026, 16:00 bis 17:00 Uhr, online</li>\n<li><strong>Organisiert von</strong>: ENABLE! Community</li>\n<li>[<a href=\"https://enable-oa.org/\">Information</a>]</li>\n</ul>\n<p>weiter zu Mai 2026 [folgt in K\u00fcrze]</p>\n","doi":"https://doi.org/10.59350/s4xat-69z93","funding_references":null,"guid":"https://blogs.fu-berlin.de/open-research-berlin/?p=4021","id":"6a3635b0-a652-448e-addb-627b5bf812d3","image":null,"indexed":true,"indexed_at":1775206819,"language":"de","parent_doi":null,"published_at":1775206767,"reference":[],"registered_at":0,"relationships":[],"rid":"vtt21-qgh66","status":"active","summary":"Unsere monatliche Rubrik zu aktuellen Veranstaltungen rund um Open Research. Anmerkung zu dieser Rubrik: Das Open Research Office Berlin erstellt monatlich eine \u00dcbersicht \u00fcber Termine und Veranstaltungen zu Open Access und Open Research in Berlin bzw. an Berliner Einrichtungen. 
Der Fokus liegt dabei auf unseren Partnereinrichtungen und auf Veranstaltungen, die sich an die \u00d6ffentlichkeit richten bzw.","tags":["Veranstaltungshinweise"],"title":"Veranstaltungshinweise April 2026","updated_at":1775206767,"url":"https://blogs.fu-berlin.de/open-research-berlin/2026/04/03/veranstaltungshinweise-april-2026/","version":"v1"},{"abstract":"I am writing this blog with a heavy heart.\u00a0 After 21 years and 2,000 blogs I have taken the decision to \u2018rest\u2019 the website after Easter.\u00a0 My reasons are varied.","archive_url":null,"authors":[{"contributor_roles":[],"family":"Akass","given":"Kim"}],"blog":{"archive_collection":null,"archive_host":null,"archive_prefix":null,"archive_timestamps":null,"authors":null,"canonical_url":null,"category":"mediaAndCommunications","community_id":"d0965544-4413-4b89-aedb-36ae2153c1ac","created_at":1730394736,"current_feed_url":null,"description":"Television Studies Blog","doi_as_guid":false,"favicon":"https://rogue-scholar.org/api/communities/d0965544-4413-4b89-aedb-36ae2153c1ac/logo","feed_format":"application/atom+xml","feed_url":"https://cstonline.net/feed/atom/","filter":null,"funding":null,"generator":"WordPress","generator_raw":"WordPress 6.7.1","home_page_url":"https://cstonline.net/","id":"3e29853c-05ee-479f-aa7d-867ff6dce1e9","indexed":true,"issn":null,"language":"en","license":"https://creativecommons.org/licenses/by/4.0/legalcode","mastodon":null,"prefix":"10.59350","registered_at":0,"relative_url":null,"ror":null,"secure":true,"slug":"cstonline","status":"active","subfield":"3315","subfield_validated":null,"title":"CST Online","updated_at":1775288937.968264,"use_api":true,"use_mastodon":false,"user_id":"80307be4-0a5d-4378-a38f-91852e38c1d8"},"blog_name":"CST Online","blog_slug":"cstonline","content_html":"<p style=\"font-weight: 400;\">I am writing this blog with a heavy heart.\u00a0 After 21 years and 2,000 blogs I have taken the decision to \u2018rest\u2019 the website after Easter.\u00a0 My reasons are varied.\u00a0 Since we started this iteration of CSTonline, with my gripe about <a href=\"https://cstonline.net/sky-exclusivity-weve-been-here-before-by-kim-akass/\">Sky Exclusivity </a>and John Ellis\u2019s <a href=\"https://cstonline.net/letter-from-america-by-john-ellis-3/\">letter from America</a>, we have had a steady stream of blogs.\u00a0\u00a0 Some weeks we were inundated and other weeks not so, but we have always received something from someone.</p>\n<p style=\"font-weight: 400;\">The idea of the website was to provide a public, open access forum, for the dissemination of writing about TV, reports from funded projects and just general \u2018this is what I saw this week\u2019.\u00a0 We always said that TV demanded instant responses, we couldn\u2019t always wait for publishers to print our thoughts \u2013 the promise of the internet meant that we could receive a blog and have it out there for reading within a week.\u00a0 Heady days.</p>\n<p style=\"font-weight: 400;\">The problem is that, over the past few years, Higher Education has been undergoing some pretty seismic changes.\u00a0 Redundancies (voluntary or otherwise), lack of funding, heavier workloads for remaining staff and increased demands from students have meant that everyone has less and less time to devote to writing that doesn\u2019t bring some kind of institutional reward.\u00a0 It makes sense that, in this case, with families to attend, books to write and students to teach, coupled with the demands of REF (or the tenure track) and a general 
sense of overwhelm, the blogs have stopped coming.</p>\n<p style=\"font-weight: 400;\">Thanks to stalwart bloggers, and a team of committed volunteers, we have managed to keep the website alive but it has become clear that something has to change.\u00a0 Podcasts are the new (old) blogs and, despite our attempts to keep everyone interested, it is time to admit that we can no longer proceed without regular content.</p>\n<p style=\"font-weight: 400;\">We <a href=\"https://cstonline.net/cst-online-relaunch-by-kim-akass/\">re-launched CSTonline</a> in its present state on 19 February 2011.\u00a0 Early days were exciting and busy.\u00a0 My re-launch blog announced that \u2018We are retaining David Lavery\u2019s column <em>Telegenic</em>, with his insightful and humorous look at all things televisual.\u00a0\u00a0<em>In Primetime</em>\u00a0stays and so do the regularly updated sections \u2013 Calls For Papers, upcoming conferences, workshops and study days (listed monthly), postgraduate funding, the (very) occasional job vacancy and my favourite TV story of the week (or sometimes day) complete with moving pictures.\u2019</p>\n<p style=\"font-weight: 400;\">Even someone as prolific as David Lavery, however, found it difficult to keep up with blogging demands and called \u2018Telegenic\u2019 quits after his blog on <em><a href=\"https://cstonline.net/the-state-of-the-american-sitcom-v-modern-family-by-david-lavery/\">Modern Family</a></em>.\u00a0 He <a href=\"https://cstonline.net/?s=Lavery\">continued to blog for us</a> until he sadly died on 30 August 2016.\u00a0 <a href=\"https://cstonline.net/?s=Pixley\">Andrew Pixley</a> has been one of our more prolific bloggers as has <a href=\"https://cstonline.net/?s=Beattie\">Melissa Beattie</a>.\u00a0 I have <a href=\"https://cstonline.net/?s=Akass\">written a few over the years</a> as has the aforementioned <a href=\"https://cstonline.net/?s=Ellis\">John Ellis</a>.\u00a0 <a href=\"https://cstonline.net/?s=Weissmann\">Elke Weissmann</a> has been prolific as well as editing and managing ECREA\u2019s contributions (for which I am grateful). \u00a0We have featured blogs from all over the world about subjects relevant to TV from Public Service Broadcasting to commercial dramas, streaming, cable, networks, social media \u2026 the list goes on.</p>\n<p style=\"font-weight: 400;\">I am sure that the community has much more to say about the state of television.\u00a0 Streaming has up-ended the industry, as has the introduction of AI, the writers\u2019 strikes and the continued (and continual) attack on the BBC. 
There is always something to say but, unfortunately, not always the time to say it.</p>\n<p style=\"font-weight: 400;\">I continue to be passionate about TV: I love watching, reading about and writing about television.\u00a0 I am sure there are people out there who want to blog, and we will always publish if someone wants to submit something.\u00a0 However, I reluctantly admit that, if I can\u2019t find the time to write a blog, why should I expect others to?</p>\n<p style=\"font-weight: 400;\">I am so very grateful for the amazing support I have had over the years.\u00a0 Debra Ramsay, Lisa Kelly, Sarah Lahm and Ben Keightly have served faithfully (if I have forgotten someone, I apologise).\u00a0 I have received institutional support from Royal Holloway and the University of Hertfordshire.\u00a0 The editorial board at <em>Critical Studies in Television</em> have been amazing.\u00a0 This website would never have got off the ground without mediacitizens, who freely gave design work and web hosting.\u00a0 My most grateful thanks go to Tobias Steiner, who continues to work hard on the back end of the website.\u00a0 All of this time and hard work has been freely and generously given.</p>\n<p style=\"font-weight: 400;\">The website will remain online \u2013 there is a wealth of television history contained in its massive archive and I do hope you will continue to read and engage with it.</p>\n<p style=\"font-weight: 400;\">But, until the next iteration of the website, we are reluctantly calling time on this endeavour.</p>\n<div style=\"width: 480px;\" class=\"wp-video\"><video class=\"wp-video-shortcode\" id=\"video-15775-1\" width=\"480\" height=\"360\" preload=\"metadata\" controls=\"controls\"><source type=\"video/mp4\" src=\"https://cstonline.net/wp-content/uploads/2026/04/YTDown.com_YouTube_Bugs-Bunny-That-s-All-Folks_Media_HeERupuicHE_001_360p.mp4?_=1\" /><a href=\"https://cstonline.net/wp-content/uploads/2026/04/YTDown.com_YouTube_Bugs-Bunny-That-s-All-Folks_Media_HeERupuicHE_001_360p.mp4\">https://cstonline.net/wp-content/uploads/2026/04/YTDown.com_YouTube_Bugs-Bunny-That-s-All-Folks_Media_HeERupuicHE_001_360p.mp4</a></video></div>\n","doi":"https://doi.org/10.59350/149p8-3jh82","funding_references":null,"guid":"https://cstonline.net/?p=15775","id":"37b623ec-0fd6-45c1-b384-536b7142f175","image":"https://cstonline.net/wp-content/uploads/2026/04/Past-Future-image-2021-1024x421-1.jpg","indexed":true,"indexed_at":1775205403,"language":"en","parent_doi":null,"published_at":1775203941,"reference":[],"registered_at":0,"relationships":[],"rid":"c3h28-yep51","status":"active","summary":"I am writing this blog with a heavy heart.\u00a0 After 21 years and 2,000 blogs I have taken the decision to \u2018rest\u2019 the website after Easter.\u00a0 My reasons are varied.\u00a0 Since we started this iteration of CSTonline, with my gripe about Sky Exclusivity and John Ellis\u2019s letter from America, we have had a steady stream of blogs.","tags":["Blogs"],"title":"CSTonline by Kim Akass","updated_at":1775204127,"url":"https://cstonline.net/cstonline-by-kim-akass/","version":"v1"},{"abstract":"2 days with up to 100+ papers in 30+ panels, 4 keynote events, lunches and refreshment breaks for both days, optional self-funded conference meal, student rates (and lottery free spaces) and campus accommodation available \u2013 Talbot Campus \u2013 Bournemouth University DEADLINE FOR SUBMISSION 3 May 2026 The Centre for the Study of Conflict, Emotion and 
[\u2026]","archive_url":null,"authors":[{"contributor_roles":[],"family":"Akass","given":"Kim"}],"blog":{"archive_collection":null,"archive_host":null,"archive_prefix":null,"archive_timestamps":null,"authors":null,"canonical_url":null,"category":"mediaAndCommunications","community_id":"d0965544-4413-4b89-aedb-36ae2153c1ac","created_at":1730394736,"current_feed_url":null,"description":"Television Studies Blog","doi_as_guid":false,"favicon":"https://rogue-scholar.org/api/communities/d0965544-4413-4b89-aedb-36ae2153c1ac/logo","feed_format":"application/atom+xml","feed_url":"https://cstonline.net/feed/atom/","filter":null,"funding":null,"generator":"WordPress","generator_raw":"WordPress 6.7.1","home_page_url":"https://cstonline.net/","id":"3e29853c-05ee-479f-aa7d-867ff6dce1e9","indexed":true,"issn":null,"language":"en","license":"https://creativecommons.org/licenses/by/4.0/legalcode","mastodon":null,"prefix":"10.59350","registered_at":0,"relative_url":null,"ror":null,"secure":true,"slug":"cstonline","status":"active","subfield":"3315","subfield_validated":null,"title":"CST Online","updated_at":1775288937.968264,"use_api":true,"use_mastodon":false,"user_id":"80307be4-0a5d-4378-a38f-91852e38c1d8"},"blog_name":"CST Online","blog_slug":"cstonline","content_html":"<div><b>2 days with up to 100+ papers in 30+ panels, 4 keynote events, lunches and refreshment </b><strong>breaks for both days, optional self-funded conference meal, student rates (and lottery free spaces) and campus accommodation available \u2013 </strong><a href=\"https://www.bournemouth.ac.uk/why-bu/facilities-campuses/talbot-campus\"><strong>Talbot Campus \u2013 Bournemouth University</strong></a></div>\n<p style=\"font-weight: 400;\"><strong>DEADLINE FOR SUBMISSION 3 May 2026</strong></p>\n<p style=\"font-weight: 400;\"><a href=\"https://www.bournemouth.ac.uk/research/centres-institutes/centre-study-conflict-emotion-social-justice\">The Centre for the Study of Conflict, Emotion and Social Justice</a>, in the Faculty of Media, Science and Technology at Bournemouth University invites scholarly and practice-based proposals for an in-person conference on media and emotion.</p>\n<p style=\"font-weight: 400;\">As neuroscientist Raymond J. Dolan observes, \u201cemotion provides the principal currency in human relationships as well as the motivational force for what is best and worst in human behaviour\u201d (2002). Within contemporary media production and consumption, emotion often binds us together, at times appearing as a language of intimacy, vulnerability and reflexivity, and at times appearing as a language of division, entitlement and exclusion. Therefore, emotions expressed and evoked through media have attracted sustained scholarly attention across a wide range of disciplines, spanning the humanities, the social sciences, and the natural sciences.</p>\n<p style=\"font-weight: 400;\">Notably, in the era of populism, political leaders deploy emotionally charged narratives, in offering simple answers to complex problems, often with minority groups as the targets of division and abjection.\u00a0Also, techniques of production and representation deploy the language of emotion, in aesthetic and narrative-oriented contexts, and theoretical work is constantly evolving.</p>\n<p style=\"font-weight: 400;\">As Laura U. Marks discussed in her landmark text <em>The Skin of Film</em> (1999), contemporary media offers a creative space for issues of touch, memory and hegemonic challenge, invigorated through a media-based emotional landscape. 
At the same time Sara Ahmed has theorised in <em>The Cultural Politics of Emotion</em> (2014) that \u2018affective economies\u2019 and \u2018sticky associations\u2019 shape our phenomenological landscapes, defining boundaries for minority voices as much as offering spaces for resistance and reinvention.</p>\n<p style=\"font-weight: 400;\">We invite scholars from any related disciplines and industry practitioners to participate in this conference and share critical perspectives on media and emotion, drawing on their theoretical models, research trajectories or practice-based environments. Our keynote speakers, Kristyn Gorton, Kim Akass and Lisa Blackman, and our Industry keynote panel led by Christa van Raalte (see below), will offer insights into media affects and their intersection with scholarly and practice-based approaches.</p>\n<p style=\"font-weight: 400;\"><strong>AREAS OF INQUIRY (not exhaustive)</strong></p>\n<table style=\"font-weight: 400;\" width=\"662\">\n<tbody>\n<tr>\n<td width=\"662\">\u25cf\u00a0\u00a0\u00a0\u00a0\u00a0 <strong>Emotional states</strong>, such as anger, anomie, confusion, compulsion, contempt, disgust, dissociation, fear, happiness, indifference, joy, longing, nihilism, rage, regret, shame, surprise.</td>\n</tr>\n<tr>\n<td width=\"662\">\u25cf\u00a0\u00a0\u00a0\u00a0\u00a0 <strong>Practice oriented contexts</strong>, such as broadcasting, cinematography, directing, distribution, drama, documentary, editing, journalism, liveness, marketing, streaming, social media, touchscreen technology, workplace.</td>\n</tr>\n<tr>\n<td width=\"662\">\u25cf\u00a0\u00a0\u00a0\u00a0\u00a0 <strong>Political and social worlds</strong>, such as Brexit, Covid-19, citizenship, community, Gaza, disability, ethnicity, inclusivity, nationality, neoliberalism, race, religion, Sudan, Thatcherism, Trump, Ukraine.</td>\n</tr>\n<tr>\n<td width=\"662\">\u25cf\u00a0\u00a0\u00a0\u00a0\u00a0 <strong>Theoretical models</strong>, relating to concepts, such as affect, alienation, behaviour, cognition, community, colonialism, consumption, embodiment, gender, genre, identity, inclusivity, memory, minority, nostalgia, orientalism, otherness, pastiche, post-colonialism, phenomenology, reasoning, regulation, representation, sexuality, surrealism, social realism, trauma.</td>\n</tr>\n</tbody>\n</table>\n<p style=\"font-weight: 400;\"><strong>SUBMIT YOUR PROPOSALS:</strong></p>\n<p style=\"font-weight: 400;\">Please submit abstract proposals of 250 words (max) by 3 May 2026, using the appropriate links below (as single paper or pre-formed panel):</p>\n<p style=\"font-weight: 400;\"><a href=\"https://forms.office.com/Pages/ResponsePage.aspx?id=VZbi7ZfQ5EK7tfONQn-_uKTV25ijuANLi5dE2tVQ245UQTlTMVo3WjIxOU44MzVRQldYV0hYNUdXTS4u\">Media and Emotion Conference September 2026: SINGLE PAPER PROPOSAL\u00a0\u00a0 \u2013 Fill out form</a></p>\n<p style=\"font-weight: 400;\"><a href=\"https://forms.office.com/Pages/ResponsePage.aspx?id=VZbi7ZfQ5EK7tfONQn-_uKTV25ijuANLi5dE2tVQ245UQjBBMzcxWFVDUDRJMzhaU1dLTVFRWDRXSy4u\">Media and Emotion Conference September 2026: PRE-FORMED PANEL PROPOSAL \u2013 Fill out form</a></p>\n<p style=\"font-weight: 400;\">Decisions will be announced after 15<sup>th</sup> May 2026.</p>\n<p style=\"font-weight: 400;\"><strong>NB:</strong> This conference is an in-person event only, with no facility for hybrid presentations.</p>\n<p style=\"font-weight: 400;\"><strong>STUDENTS:</strong></p>\n<p style=\"font-weight: 400;\">We will also offer <strong>postgraduate researchers</strong> the opportunity to 
enter a lottery to win a <strong>registration fee waiver</strong> (with five spaces available).</p>\n<p style=\"font-weight: 400;\"><strong>REGISTRATION &amp; ACCOMMODATION</strong></p>\n<p style=\"font-weight: 400;\"><strong>Registration fee: </strong>including refreshments and lunch for two days:</p>\n<p style=\"font-weight: 400;\">\u00a3140 (students, part-time employment)</p>\n<p style=\"font-weight: 400;\">\u00a3170 (full-time employment)</p>\n<p style=\"font-weight: 400;\"><strong>Conference evening meal</strong> will be available under a separate invitation, at your own cost.</p>\n<p style=\"font-weight: 400;\"><strong>On-site campus accommodation </strong>will be available at \u00a375 for three nights (fixed price), plus \u00a325 for each additional night (over the preceding weekend).</p>\n<p style=\"font-weight: 400;\"><strong>Local hotels available</strong> at reduced conference rates.</p>\n<p style=\"font-weight: 400;\"><strong>CONFIRMED KEYNOTES:</strong></p>\n<p style=\"font-weight: 400;\"><a href=\"https://www.gold.ac.uk/media-communications/staff/blackman/\"><strong>Lisa Blackman </strong>(Professor in Media and Communications \u2013 Goldsmiths University)</a> &#8211; whose work includes:</p>\n<ul>\n<li><em>Grey Media: A Psychopolitics of Deception</em> (Punctum Books 2026).</li>\n<li><em>Haunted Data: Affect, Transmedia, Weird Science</em> (Bloomsbury 2019).</li>\n</ul>\n<p style=\"font-weight: 400;\"><strong>DECEIT AND DECEPTION:</strong> Lisa will explore media and emotion through the concept of \u2018grey media\u2019, a term which brings into alignment the long histories of apparatuses of deceit and deception which have a distinct mediality, linking the gaslighting of emotional abuse, information warfare and AI deception.</p>\n<p style=\"font-weight: 400;\"><a href=\"https://ahc.leeds.ac.uk/arts-humanities-cultures/staff/2910/professor-kristyn-gorton\"><strong>Kristyn Gorton (Professor of Film and Television \u2013 University of Leeds)</strong></a> \u2013 whose work includes:</p>\n<ul>\n<li><em>Emotion Online: Theorising Affect on the Internet</em> (Palgrave 2013).</li>\n<li><em>Media Audiences: Television, Meaning and Emotion</em> (Edinburgh University Press, 2009).</li>\n</ul>\n<p style=\"font-weight: 400;\"><strong>EMPATHY AND INTIMACY:</strong>\u00a0 This paper returns to Kristyn\u2019s earlier work (as above) and engages with recent work on &#8216;empathy&#8217; and &#8216;intimacy&#8217; to reflect on the development of the field and the ways in which television constructs emotion. Kristyn will draw on examples from serial melodrama which use excess to mark out spaces for viewers to work through narratives of social justice and change. The paper will also consider how production cultures impact and inform the affective landscape of these stories.</p>\n<p style=\"font-weight: 400;\"><strong>Kim Akass</strong> (Professor of Radio, Television and Film) &#8211; whose work includes:</p>\n<ul>\n<li><em>Mothers on American Television: From Here to Maternity</em> (Manchester University Press 2023).</li>\n</ul>\n<p style=\"font-weight: 400;\"><strong>RAGE AND MOTHERHOOD</strong>: Since the overturning of Roe v. Wade in June 2022 and the resulting ban on abortion in 13 states (so far), is it surprising that we are seeing so much female rage on our screens? From postpartum psychosis in <em>Die My Love</em> (Lynne Ramsay, 2025) to <em>If I Had Legs I\u2019d Kick You</em> (Mary Bronstein, 2025), maternal rage is, well, all the rage. 
In this paper, Kim will explore how female rage has emerged as a theme in film and TV and ask whether this is due to an increase in women behind the scenes or a reaction to punitive legislation against women\u2019s reproductive rights.</p>\n<p style=\"font-weight: 400;\"><a href=\"https://staffprofiles.bournemouth.ac.uk/display/cvanraalte\"><strong>Christa van Raalte</strong> (Associate Professor of Film and Television \u2013 Bournemouth University)</a> \u2013 whose work includes:</p>\n<ul>\n<li>The Good Manager in TV: Tales for the Twenty-first Century, in <em>Creative Industries Journal</em> (2024), with Wallis, R.</li>\n<li>More Than Just a Few \u2018Bad Apples\u2019: The Need for a Risk Management Approach to the Problem of Workplace Bullying in the UK\u2019s Television Industry, in <em>Creative Industries Journal</em> (2023), with Wallis, R. and Pekalski, D.</li>\n</ul>\n<p style=\"font-weight: 400;\"><strong>TV INDUSTRY PANEL: THE ECONOMICS OF EMOTION:</strong>\u00a0 Christa will also bring together a range of industry practitioners, considering how emotion works as a commodity for creativity in artistic and workplace contexts. What are the safeguarding standards when creators, collaborators and audiences engage with productions that frame emotional media? How might media producers negotiate the polarising emotional landscape and ethical broadcasting standards when creating content?</p>\n<p style=\"font-weight: 400;\"><strong>We are looking forward to your submissions!</strong></p>\n<p style=\"font-weight: 400;\"><strong>Conference organisers:</strong> Christopher Pullen, Catalin Brylla &amp; Savvas Voutyras of</p>\n<p style=\"font-weight: 400;\"><a href=\"https://www.bournemouth.ac.uk/research/centres-institutes/centre-study-conflict-emotion-social-justice\">The Centre for the Study of Conflict, Emotion and Social Justice</a></p>\n<p style=\"font-weight: 400;\">Bournemouth University, Faculty of Media, Science and Technology, Talbot Campus, Fern Barrow, Poole, BH12 5BB.</p>\n<p style=\"font-weight: 400;\"><strong>Conference email contact: </strong><a href=\"mailto:cpullen@bournemouth.ac.uk\">cpullen@bournemouth.ac.uk</a></p>\n","doi":"https://doi.org/10.59350/zmmp8-n8w87","funding_references":null,"guid":"https://cstonline.net/?p=15784","id":"9895a0b3-b02a-44f4-b87b-fa8655fb8712","image":"https://cstonline.net/wp-content/uploads/2026/04/1773843427481.jpeg","indexed":true,"indexed_at":1775205402,"language":"en","parent_doi":null,"published_at":1775203256,"reference":[],"registered_at":0,"relationships":[],"rid":"64rbw-1zn97","status":"active","summary":"<b>\n 2 days with up to 100+ papers in 30+ panels, 4 keynote events, lunches and refreshment\n</b>\n<strong>\n breaks for both days, optional self-funded conference meal, student rates (and lottery free spaces) and campus accommodation available \u2013\n</strong>\n<strong>\n Talbot Campus \u2013 Bournemouth University\n</strong>\n<strong>\n DEADLINE FOR SUBMISSION 3 May 2026\n</strong>\nThe Centre for the Study of Conflict, Emotion and Social Justice, in the Faculty of Media,","tags":["CFPs","CFPs Conferences"],"title":"CFP: MEDIA AND EMOTION CONFERENCE \u2013 7-8 SEPTEMBER 2026","updated_at":1775203966,"url":"https://cstonline.net/cfp-media-and-emotion-conference-7-8-september-2026/","version":"v1"},{"abstract":"In my Day 1 article, I wrote that the OECD Digital Education Outlook 2026 conference documented performance gains alongside learning losses, efficiency alongside declining human competence, and the emergence of what 
Dragan Gasevic called \u201cmetacognitive laziness.\u201d I described a day that did not offer comfort.","archive_url":null,"authors":[{"affiliation":[{"id":"https://ror.org/04h13ss13","name":"The Geneva Learning Foundation"}],"contributor_roles":[],"family":"Sadki","given":"Reda","url":"https://orcid.org/0000-0003-4051-0606"}],"blog":{"archive_collection":null,"archive_host":null,"archive_prefix":null,"archive_timestamps":null,"authors":null,"canonical_url":null,"category":"educationalSciences","community_id":"7e26491f-41c6-4665-9088-5aa6643a1ba8","created_at":1731211871,"current_feed_url":null,"description":"Learning to make a difference","doi_as_guid":false,"favicon":null,"feed_format":"application/atom+xml","feed_url":"https://redasadki.me/feed/atom/","filter":null,"funding":null,"generator":"WordPress","generator_raw":"WordPress 6.7.1","home_page_url":"https://redasadki.me","id":"88b8caba-b485-4654-96ce-a21547abaab3","indexed":true,"issn":null,"language":"en","license":"https://creativecommons.org/licenses/by/4.0/legalcode","mastodon":"https://techhub.social/@redasadki","prefix":"10.59350","registered_at":0,"relative_url":null,"ror":null,"secure":true,"slug":"redasadki","status":"active","subfield":"3304","subfield_validated":null,"title":"Reda Sadki","updated_at":1775289073.567933,"use_api":true,"use_mastodon":false,"user_id":"0d34dfde-a007-4ec9-9bc6-7b0318fa2c5e"},"blog_name":"Reda Sadki","blog_slug":"redasadki","content_html":"<p 
id=\"h-in-my-day-1-article-i-wrote-that-the-oecd-digital-education-outlook-2026-conference-documented-performance-gains-alongside-learning-losses-efficiency-alongside-declining-human-competence-and-the-emergence-of-what-dragan-gasevic-called-metacognitive-laziness-i-described-a-day-that-did-not-offer-comfort\">In my <a href=\"https://doi.org/10.59350/1bqm0-1d126\">Day 1 article</a>, I wrote that the OECD Digital Education Outlook 2026 conference documented performance gains alongside learning losses, efficiency alongside declining human competence, and the emergence of what Dragan Gasevic called \u201cmetacognitive laziness.\u201d I described a day that did not offer comfort.</p>\n\n\n\n<p>Where the first day established the tension between performance and learning, the second day forced the question of what to do about it. Nine sessions brought practitioners, researchers, young people, AI companies, and policymakers face to face with the growing evidence that generative AI in education is producing a widening gap between what students can do with AI and what they understand without it. The most striking contribution came not from a professor or a minister but from Beatriz Moutinho, a young woman from Cabo Verde, who said: \u201cI am very worried about AI replacing young people in the job market. But I am even more worried about young people preemptively replacing themselves.\u201d</p>\n\n\n\n<p>That sentence reframed the entire day: what happens when people become indistinguishable from the AI itself?</p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-self-replacement-risk-young-people-see-what-adults-are-slow-to-name\">Self-replacement risk: Young people see what adults are slow to name</h2>\n\n\n\n<p>Beatriz Moutinho, moderating and speaking in the youth session, articulated risks that the research sessions had danced around. She described an escalation pattern: students begin by using AI for discrete tasks, progress to using it for structuring their thinking, and eventually use it to form opinions and make personal decisions. \u201cWe are giving our first drafts of our first thoughts in our brain directly to AI before even fully structuring them,\u201d she said.</p>\n\n\n\n<p>Her concept of \u201cself-replacement\u201d was the most original intellectual contribution of the day. It is not that AI will take young people\u2019s jobs. It is that young people will preemptively delegate the formation of their own professional voice to AI, producing homogenised output that makes them indistinguishable from the machine. \u201cThis loss of differentiation might be something to look out for,\u201d Moutinho said, \u201cespecially in the job market.\u201d</p>\n\n\n\n<p>She also identified what she called a \u201cflipped AI divide\u201d: wealthier students retain access to human support while lower-income students become increasingly reliant on AI alone. This inverts the optimistic narrative of AI as an equaliser.</p>\n\n\n\n<p>Elisa Lorenzini, a student from Italy, and Kenji Inoue, a student from Japan, both reported that their schools had provided no formal AI literacy instruction. Lorenzini said her teachers prohibited AI because they did not understand it. 
\u201cIt would be useful if teachers knew how to use it,\u201d she said, \u201cbecause maybe they can understand why it is a useful tool even for students.\u201d</p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-the-performance-learning-gap-deepens\">The performance-learning gap deepens</h2>\n\n\n\n<p>The central finding of the OECD Digital Education Outlook 2026, presented as a keynote by lead editor Stephan Vincent-Lancrin, is blunt. General-purpose generative AI tools reliably improve short-term task performance but do not reliably produce learning gains. The mechanism is metacognitive laziness: when AI produces fluent, confident output, learners stop monitoring their own thinking.</p>\n\n\n\n<p>Vincent-Lancrin reported that high school and vocational students in several countries approach 80 percent usage rates for generative AI. He described a study in which students using ChatGPT for homework scored zero additional points on a subsequent knowledge test. \u201cOur traditional education model assumes that if we perform better, then that means we have the knowledge and skills,\u201d he said. \u201cWhich is very problematic.\u201d</p>\n\n\n\n<p>Dragan Gasevic, presenting in the assessment session, provided the sharpest experimental evidence. A randomised controlled trial lasting nearly a full semester with medical students showed that those given immediate AI access performed no better than the AI working alone. Only students who developed their clinical reasoning skills before AI was introduced achieved genuine human-AI synergy. \u201cHybrid intelligence is not that you just automate a task to AI,\u201d Gasevic said. \u201cIf your ability is completely automated, that means you are obsolete as well yourself.\u201d</p>\n\n\n\n<p>Inge Molenaar of Radboud University explained the mechanism. The fluency of AI output suppresses the metacognitive cues that normally trigger critical evaluation. \u201cThe metacognitive cues that generative AI responses give to humans do not allow us to engage or do not trigger us to engage in critical evaluation and in learning activities,\u201d she said. \u201cIt increases the chance of accepting it and moving backwards.\u201d </p>\n\n\n\n<p>The zone of proximal development collapses: AI output is often beyond what a student can process, and instead of scaffolding learning, it replaces it.</p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-practitioners-redesign-everything-from-scratch\">Practitioners redesign everything from scratch</h2>\n\n\n\n<p>If <a href=\"https://redasadki.me/2026/03/24/oecd-digital-education-outlook-2026-how-can-ai-help-human-beings-learn-and-grow/\" type=\"post\" id=\"23232\">Day 1 established the theory</a>, Day 2 showed the practice. The opening session brought teachers from Iceland, England, and India who are living with AI in their classrooms every day.</p>\n\n\n\n<p>Frida Gylfadottir and Tinna Osp Arnardottir, from a secondary school in Gardabae, Iceland, described a national pilot involving 255 teachers across 31 schools. They have redesigned assessment so that written essays count for only 20 percent of the grade, with oral draft interviews and oral defences making up the rest. \u201cIf they have not written the essay, if the text is written by AI, it is really difficult for them to point out where the thesis statement is located or the topic sentences,\u201d Gylfadottir said. \u201cThey cannot fake it.\u201d</p>\n\n\n\n<p>Christian Turton of the Chiltern Learning Trust in England was equally direct. 
\u201cEvery assignment and every test, every task we used to rely on has to be rethrown from scratch,\u201d he said. Turton introduced the concept of \u201cdigital metacognition,\u201d thinking about where the thinking happens when using AI. He also reported that his trust trialled AI marking tools and found the error rate unacceptable.</p>\n\n\n\n<p>Souptik Pal of the Learning Links Foundation in India described classrooms of 100 students where differentiation without AI is nearly impossible. After two-day teacher training sessions, the majority of trained teachers began using AI for daily lesson planning. But Pal emphasised that the biggest barrier is not technical. It is attitudinal. \u201cThe most important challenge is coming with the mindset that AI will replace the teachers,\u201d he said.</p>\n\n\n\n<p>Gylfadottir captured a practitioner reality in one sentence: \u201cThe truth is right now we are spending more time, not less.\u201d</p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-bricolage-assessment-must-change-but-the-evidence-base-is-dangerously-thin\">Bricolage: assessment must change, but the evidence base is dangerously thin</h2>\n\n\n\n<p>Ryan Baker proposed \u201cinvigilation on an audit basis\u201d as one way forward. Let students use AI to produce artefacts, but periodically ask them to explain their work without the technology present. \u201cIf they cannot talk about it, then they do not really understand it,\u201d he said. Nikol Rummel described a collaborative approach in which students using different AI prompts must reconcile divergent outputs, creating what she called the \u201cIKEA effect\u201d: ownership through effortful engagement and <em>bricolage</em>.</p>\n\n\n\n<p>Gasevic pushed further, arguing for two parallel assessment streams: one measuring standalone human skills, and another measuring human-AI synergy. He reported that LLM-based analysis of process data, including chat logs and keystroke patterns, already achieves approximately 80 percent of expert-quality results, making scalable process assessment technically feasible.</p>\n\n\n\n<p>But behind these proposals sits an uncomfortable truth that Isabelle Hau of the Stanford Accelerator for Learning made explicit in the safety session. Her systematic review found only 22 causal-quality studies on AI and learning. No longitudinal data exist. \u201cWe are currently running a massive uncontrolled experiment on our children,\u201d said Stephie Herlin of KORA, \u201cand you cannot improve what you do not measure.\u201d KORA has benchmarked more than 30 AI models. Closed-source models average 49 percent on child safety scores. Open-source models average 25 percent. Seven models score zero.</p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-ai-literacy-as-everyone-s-responsibility-means-it-is-nobody-s-responsibility\">AI literacy as everyone\u2019s responsibility means it is nobody\u2019s responsibility</h2>\n\n\n\n<p>The AI literacy session, moderated by Laura Lindberg of European Schoolnet, revealed a paradox that Daniela Hau of Luxembourg\u2019s Ministry of Education stated plainly: \u201cIf we say everybody, we risk saying nobody.\u201d</p>\n\n\n\n<p>The <a href=\"https://ailiteracyframework.org\">EC-OECD AI Literacy Framework</a> defines 22 competences across four domains. Mario Piacentini of the OECD described how this framework will be translated into a PISA 2029 assessment. 
Simona Petkova of the European Commission reported that young people in Europe are twice as likely to use generative AI as the general population, yet three out of four teachers do not feel well prepared to address AI in the classroom. Teachers are estimated to be more exposed to AI than 90 percent of workers across the EU.</p>\n\n\n\n<p>The most significant empirical contribution came from Lixiang Yan of Tsinghua University, who presented a national study of nearly 2.4 million Chinese vocational students. Yan found that institutional AI readiness only improves student AI literacy when it runs through teachers who have developed genuine instructional competence with AI. \u201cThe teacher is the indispensable engine in this transformation,\u201d Yan said. General attitudinal acceptance is not enough. The system must build collective instructional capability.</p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-ai-in-research-is-already-everywhere-and-the-risks-mirror-education\">AI in research is already everywhere, and the risks mirror education</h2>\n\n\n\n<p>Dominique Guellec of the University of Strasbourg documented the penetration of AI in scientific research: from 2 percent of publications in 2015 to 8 percent in 2022, and approaching two-thirds of all researchers using AI by 2025. He described AI as no longer a tool but part of the infrastructure of doing research. \u201cThere is a risk on the human side to over-rely on AI, especially when it does the writing for you,\u201d Guellec said. \u201cWriting is also a part of thinking.\u201d</p>\n\n\n\n<p>In a moment that captured the pace of change more vividly than any statistic, Guellec acknowledged on stage that sections of his own OECD Digital Education Outlook 2026 chapter were already outdated. \u201cWhat I put in the slide, which is that AI does not yet do research-level mathematics, is already outdated,\u201d he said.</p>\n\n\n\n<p>Yuko Harayama of the Global Partnership on AI argued that the researcher\u2019s identity needs to shift from generating solutions to evaluating them. \u201cWhat you have to re-explore and re-empower will be the out-of-the-box thinking,\u201d she said, \u201cnot just following and becoming dependent on the output coming from AI.\u201d A <a href=\"https://www.science.org/doi/10.1126/science.adw3000\">study published in Science Magazine</a>, cited in the session, found homogenisation of research topics in the fields most intensive in AI use.</p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-the-equity-question-is-structural-not-peripheral\">The equity question is structural, not peripheral</h2>\n\n\n\n<p>The session on educational GenAI in low and middle-income areas, moderated by Cristobal Cobo of the World Bank, confronted a question that Day 1 raised but did not resolve: will AI close or widen the educational divide?</p>\n\n\n\n<p>Paul Atherton laid out the infrastructure gap. Children in low-income countries are up to 14 times less likely to have internet at home. But Atherton argued that the more fundamental barrier is literacy itself. \u201cIf you cannot read, you cannot access a language model that is done through reading,\u201d he said. The Matthew effect applies: those with the most capability to use AI gain the most.</p>\n\n\n\n<p>Seiji Isotani of the University of Pennsylvania presented the most compelling positive evidence. 
His <a href=\"https://doi.org/10.1007/978-3-031-36336-8_118\">AIED Unplugged system</a> reached more than 500,000 students across 20,000 schools in Brazil using only teacher mobile phones and printed feedback sheets. No student devices or internet were required. \u201cInstead of putting the burden on governments, we put the burden on people who develop technologies,\u201d Isotani said.</p>\n\n\n\n<p>Maria Florencia Ripani argued that language and culture are not technical parameters. \u201cLanguage is part of a certain culture,\u201d she said. \u201cIt is very important to work with user-centred design and use culturally relevant elements.\u201d She described how models in Luganda already outperform GPT-3.5 from two years ago, despite substantial performance degradation compared to English.</p>\n\n\n\n<p>Juan-Pablo Giraldo Ospino of UNICEF delivered the most direct challenge: \u201cTeachers cannot be replaced in the education system and cannot be replaced in the way our brain develops, particularly in the early years.\u201d He warned that framing AI as a solution to teacher shortages risks exacerbating burnout, because \u201cif we increase productivity, actually we are going to make teachers work the same hours or more to be able to teach more kids.\u201d</p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-learning-science-points-toward-slow-ai\">Learning science points toward slow AI</h2>\n\n\n\n<p>The final session, on applying learning science with AI, offered the clearest design direction of the day. Ronald Beghetto of Arizona State University introduced the concept of \u201cslow AI,\u201d a deliberate counterpoint to the transactional \u201cfast AI\u201d mode in which users delegate cognitive and creative work entirely. \u201cA lot of people think creativity is just kind of unbridled originality, but really creativity is constrained originality,\u201d he said. His framework asks learners to do the mental work first, then turn to AI as a provocateur or scaffold, then return to human teams.</p>\n\n\n\n<p>Dora Demszky of Stanford presented the first large-scale randomised controlled trial of automated feedback in physical classrooms. Teachers using her TeachFX platform received real-time feedback on their use of focusing questions, and the behaviour increased by 15 to 20 percent. But she also noted a structural problem: \u201cOne of the issues with machine learning systems is that they are trained to say what you want to hear rather than adding the productive friction that is necessary for learning.\u201d Sycophancy in large language models is not a bug. It is a design feature that undermines learning.</p>\n\n\n\n<p>Nikol Rummel and Sebastian Strauss presented a systematic review of GenAI in collaborative learning that found only two experimental studies measuring domain-specific knowledge outcomes. The evidence base for one of the most-discussed applications of AI in education barely exists.</p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-beyond-k-12-what-oecd-digital-education-outlook-s-dialogue-means-for-humanitarian-and-health-systems\">Beyond K-12: what OECD Digital Education Outlook\u2019s dialogue means for humanitarian and health systems</h2>\n\n\n\n<p>The OECD conference focused on schools. 
But every finding from Day 2 reaches into the world I work in, where health workers and humanitarian practitioners learn from each other across more than 130 countries in the peer learning networks coordinated by The Geneva Learning Foundation.</p>\n\n\n\n<p>The Day 1 article mapped three implications. Day 2 deepened each of them and surfaced new ones.</p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-self-replacement-is-already-happening-in-global-health\">Self-replacement is already happening in global health</h3>\n\n\n\n<p>Moutinho\u2019s concept of self-replacement is not speculative in our context. It describes what I have already observed. In our Teach to Reach programmes, <a href=\"https://redasadki.me/2025/03/09/artificial-intelligence-accountability-and-authenticity-knowledge-production-and-power-in-global-health-crisis/\" type=\"post\" id=\"20803\">highly committed health workers have begun submitting narratives that clearly bear the mark of generative AI</a>. They are not cheating. They are doing what every professional does when a tool appears that can produce faster, more polished output. But the result is a loss of the situated, experiential knowledge that makes their contributions irreplaceable.</p>\n\n\n\n<p>I wrote about this as the \u201ctransparency paradox\u201d in my work on AI, accountability, and authenticity in global health. If a health worker discloses AI use, their work is devalued as inauthentic. If they conceal it, they carry the ethical tension alone.</p>\n\n\n\n<p>Moutinho\u2019s framing adds a dimension I had not fully articulated: the risk is not only institutional but developmental. When practitioners delegate the act of writing about their own experience to AI, they may lose the capacity to recognise what they know that AI does not.</p>\n\n\n\n<p>In crisis contexts, this is not an abstraction. A health worker who cannot articulate the reasoning behind a vaccination micro-plan, because the writing was done by a chatbot and the thinking was never fully formed, is a health worker less able to adapt when the plan meets reality on the ground.</p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-the-evidence-gap-is-wider-in-global-health-than-in-k-12\">The evidence gap is wider in global health than in K-12</h3>\n\n\n\n<p>Isabelle Hau\u2019s finding that only 22 causal-quality studies on AI and learning exist is alarming for education. In global health and humanitarian response, the number is effectively zero. AI tools are being deployed to support health worker training, translate guidance, and even generate response protocols, but I am not aware of a single randomised controlled trial measuring whether these tools produce genuine learning gains among health professionals in low-resource settings.</p>\n\n\n\n<p>Gasevic\u2019s finding that students given immediate AI access performed no better than AI alone has a direct analogue. If a health worker uses a general-purpose chatbot to draft an outbreak response protocol without first developing the clinical reasoning that the protocol requires, the output may be fluent and authoritative while the human understanding behind it is empty. In K-12, this undermines learning. 
In health systems and in humanitarian response, it can cost lives.</p>\n\n\n\n<p>At The Geneva Learning Foundation, we introduced our first AI co-worker, <a href=\"https://redasadki.me/2026/03/13/introducing-claude-cardot-our-first-ai-co-worker-to-support-frontline-health-and-humanitarian-leaders/\" type=\"post\" id=\"23130\">Claude Cardot</a>, in March 2026, deliberately naming and governing the role. We are treating Claude\u2019s onboarding as a structured experiment, asking in public whether an AI co-worker can reduce the cognitive load on a small team without diluting authenticity or erasing local voice. But we are under no illusion that this is anything other than a design question that the evidence base cannot yet answer.</p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-the-flipped-ai-divide-is-the-central-equity-problem-for-global-health\">The flipped AI divide is the central equity problem for global health</h3>\n\n\n\n<p>Moutinho\u2019s \u201cflipped AI divide\u201d is the most precise description I have encountered of the equity challenge in global health AI. In the countries where The Geneva Learning Foundation works, access to advanced models is already limited by geofencing, pricing, and risk aversion by international organisations. When practitioners in these settings do use AI, they use general-purpose chatbots without pedagogical intent, institutional support, or safety standards. This is exactly the configuration that the OECD evidence shows produces performance gains without learning gains.</p>\n\n\n\n<p>Meanwhile, organisations in Geneva, New York, and Washington have access to purpose-built AI tools, teams of data scientists, and legal departments that can negotiate safety standards. The result is that the most resource-rich actors get AI that is designed to support human capability, while the practitioners who face the most severe challenges get AI that is designed for consumer engagement. This is the flipped AI divide in global health.</p>\n\n\n\n<p>Isotani\u2019s AIED Unplugged model offers a counterpoint that speaks directly to our work. His system proves that it is possible to design AI for resource-constrained environments at national scale, reaching half a million students with no student devices and no classroom internet. If it is possible in Brazilian public schools, it is possible in the health systems where we work. The design principle is the same one we apply at The Geneva Learning Foundation: the burden of adaptation must fall on technology designers, not on the practitioners and communities who are often already stretched to their limits.</p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-peer-learning-is-the-missing-architecture\">Peer learning is the missing architecture</h3>\n\n\n\n<p>Across two days of the OECD conference, one word barely appeared: peers. The conference discussed teachers, students, researchers, companies, and policymakers. It discussed tutoring, assessment, safety, and governance. What it did not discuss, with rare exceptions, was what happens when learners support each other, becoming both teachers and learners.</p>\n\n\n\n<p>This is the gap that our work fills. 
In the <a href=\"https://redasadki.me/2025/06/17/when-funding-shrinks-impact-must-grow-the-economic-case-for-peer-learning-networks/\" type=\"post\" id=\"20995\">peer learning networks that The Geneva Learning Foundation has built over a decade</a>, health workers develop context-specific projects, review each other\u2019s work using structured rubrics, and engage in facilitated dialogue that surfaces patterns across thousands of contexts. We envision AI not as a tutor or an oracle but as a co-worker that helps with tasks that peers have neither time nor bandwidth to perform at scale.</p>\n\n\n\n<p>Gasevic\u2019s experimental finding confirms the design logic we have been following. Students who developed their skills before AI was introduced achieved genuine synergy. In our networks, practitioners build their capacities through structured peer interaction before AI enters the picture. The human architecture comes first. AI amplifies and augments what the network has already built. Its boundaries are defined by the network.</p>\n\n\n\n<p>Beghetto\u2019s \u201cslow AI\u201d resonates with this approach. In a peer learning network, the \u201cproductive friction\u201d that commercial AI removes is precisely what the network is designed to generate. Peer review, facilitated dialogue, and iterative project development are all forms of friction that produce learning. If we strip these out and replace them with chatbot-generated feedback, we lose what makes the system work.</p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-a-leadership-agenda-for-day-2\">A leadership agenda for Day 2</h2>\n\n\n\n<p>Day 1 produced a leadership agenda focused on the performance-learning distinction, the need for pedagogy before technology, and the urgency of equity. Day 2 extends it.</p>\n\n\n\n<p>First, leaders must confront the self-replacement problem directly. Moutinho described it in young people. I see it in health and humanitarian professionals. The response is not to ban AI or to ignore it, but to create conditions in which practitioners can use AI openly and with pedagogical intent. This means moving from \u201cshadow AI\u201d to governed AI, as we are doing with Claude Cardot. It also means designing learning experiences that require practitioners to do the cognitive work before AI enters, not after.</p>\n\n\n\n<p>Second, leaders must demand evidence. Twenty-two causal studies is not a sufficient foundation for policy. In global health and humanitarian response, where the evidence base is even thinner, leaders should insist that any AI deployment in training or capacity-building includes a credible evaluation design. Efficiency gains are not learning gains. The two must be measured separately.</p>\n\n\n\n<p>Third, leaders must resist the flipped AI divide. If the most resource-constrained practitioners end up with unguided access to general-purpose chatbots while the most resource-rich organisations get purpose-built, safety-tested, pedagogy-driven AI tools, the result will be a deepening of the inequity that <a href=\"https://redasadki.me/2025/07/16/why-peer-learning-is-critical-to-survive-the-age-of-artificial-intelligence/\" type=\"link\" id=\"https://redasadki.me/2025/07/16/why-peer-learning-is-critical-to-survive-the-age-of-artificial-intelligence/\">peer learning networks are designed to overcome</a>. The Isotani model shows that another path is possible. Leaders should demand it.</p>\n\n\n\n<p>Fourth, leaders must invest in peer learning infrastructure alongside AI deployment. 
Every finding from the OECD conference confirms that AI is most powerful when embedded in human systems that provide the friction, the context, and the accountability that AI alone cannot supply. Peer learning networks are not optional. They are the architecture that determines whether AI amplifies human capability or replaces it.</p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-what-the-second-day-left-unresolved\">What the second day left unresolved</h2>\n\n\n\n<p>The second day of the OECD conference did not resolve the question that Moutinho raised. It sharpened it. If young people are preemptively replacing themselves, and if health workers in crisis settings are quietly delegating their situated knowledge to machines, then the question is not whether AI can help human beings learn and grow. It is whether we will design the systems that make that possible before the window closes.</p>\n\n\n\n<p>Guellec\u2019s observation that his own OECD chapter was outdated before the conference took place is not only a comment about the pace of change in AI. It is a warning about the pace of change required in every institution that claims to support learning. The evidence is now clear that doing nothing, or doing the wrong thing, is not neutral. It is actively harmful. And the people most at risk are, as always, those with the least institutional support and the most to lose.</p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-references\">References</h2>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Isotani S, Bittencourt II, Challco GC, Dermeval D, Mello RF. AIED Unplugged: Leapfrogging the Digital Divide to Reach the Underserved. In: Wang N, Rebolledo-Mendez G, Dimitrova V, Matsuda N, Santos OC, editors. Artificial Intelligence in Education. Posters and Late Breaking Results, Workshops and Tutorials, Industry and Innovation Tracks, Practitioners, Doctoral Consortium and Blue Sky. Cham: Springer Nature Switzerland; 2023. p. 772\u20139. (Communications in Computer and Information Science). <a href=\"https://doi.org/10.1007/978-3-031-36336-8_118\">https://doi.org/10.1007/978-3-031-36336-8_118</a></li>\n\n\n\n<li>Kusumegi K, Yang X, Ginsparg P, De Vaan M, Stuart T, Yin Y. Scientific production in the era of large language models. Science. 2025 Dec 18;390(6779):1240\u20133. <a href=\"https://doi.org/10.1126/science.adw3000\">https://doi.org/10.1126/science.adw3000</a></li>\n\n\n\n<li>OECD. OECD Digital Education Outlook 2026: Exploring Effective Uses of Generative AI in Education. OECD Publishing, 2026. <a href=\"https://doi.org/10.1787/062a7394-en\">https://doi.org/10.1787/062a7394-en</a>.</li>\n\n\n\n<li>Reda Sadki (2025). The great unlearning: notes on the Empower Learners for the Age of AI conference. Reda Sadki: Learning to make a difference. <a href=\"https://doi.org/10.59350/859ed-e8148\">https://doi.org/10.59350/859ed-e8148</a></li>\n\n\n\n<li>Reda Sadki (2025). Artificial intelligence, accountability, and authenticity: knowledge production and power in global health crisis. Reda Sadki: Learning to make a difference. <a href=\"https://doi.org/10.59350/w1ydf-gd85\">https://doi.org/10.59350/w1ydf-gd85</a></li>\n\n\n\n<li>Reda Sadki (2025). When funding shrinks, impact must grow: the economic case for peer learning networks. Reda Sadki: Learning to make a difference. <a href=\"https://doi.org/10.59350/redasadki.20995\">https://doi.org/10.59350/redasadki.20995</a></li>\n\n\n\n<li>Reda Sadki (2025). Why peer learning is critical to survive the Age of Artificial Intelligence. 
Reda Sadki: Learning to make a difference. <a href=\"https://doi.org/10.59350/redasadki.21123\">https://doi.org/10.59350/redasadki.21123</a></li>\n\n\n\n<li>Reda Sadki (2026). Introducing Claude Cardot, our first AI co-worker to support frontline health and humanitarian leaders. Reda Sadki: Learning to make a difference. <a href=\"https://doi.org/10.59350/6rjnm-1rd08\">https://doi.org/10.59350/6rjnm-1rd08</a></li>\n\n\n\n<li>Reda Sadki (2026). OECD Digital Education Outlook 2026: How can AI help human beings learn and grow?. Reda Sadki: Learning to make a difference. <a href=\"https://doi.org/10.59350/1bqm0-1d126\">https://doi.org/10.59350/1bqm0-1d126</a></li>\n</ol>\n","doi":"https://doi.org/10.59350/skb2r-wqp57","funding_references":null,"guid":"https://redasadki.me/?p=23278","id":"5143a891-fbd6-49b1-acd3-2e152fe370af","image":"https://redasadki.me/wp-content/uploads/2026/04/OECD-Digital-Education-Outlook-2026-Day-2.jpg","indexed":true,"indexed_at":1775203670,"language":"en","parent_doi":null,"published_at":1775203058,"reference":[{"id":"https://doi.org/10.1007/978-3-031-36336-8_118","unstructured":"Isotani S, Bittencourt II, Challco GC, Dermeval D, Mello RF. AIED Unplugged: Leapfrogging the Digital Divide to Reach the Underserved. In: Wang N, Rebolledo-Mendez G, Dimitrova V, Matsuda N, Santos OC, editors. Artificial Intelligence in Education. Posters and Late Breaking Results, Workshops and Tutorials, Industry and Innovation Tracks, Practitioners, Doctoral Consortium and Blue Sky. Cham: Springer Nature Switzerland; 2023. p. 772\u20139. (Communications in Computer and Information Science). https://doi.org/10.1007/978-3-031-36336-8_118"},{"id":"https://doi.org/10.1126/science.adw3000","unstructured":"Kusumegi K, Yang X, Ginsparg P, De Vaan M, Stuart T, Yin Y. Scientific production in the era of large language models. Science. 2025 Dec 18;390(6779):1240\u20133. https://doi.org/10.1126/science.adw3000"},{"id":"https://doi.org/10.1787/062a7394-en","unstructured":"OECD. OECD Digital Education Outlook 2026: Exploring Effective Uses of Generative AI in Education. OECD Publishing, 2026. https://doi.org/10.1787/062a7394-en."},{"id":"https://doi.org/10.59350/859ed-e8148","unstructured":"Reda Sadki (2025). The great unlearning: notes on the Empower Learners for the Age of AI conference. Reda Sadki: Learning to make a difference. https://doi.org/10.59350/859ed-e8148"},{"id":"https://doi.org/10.59350/w1ydf-gd85","unstructured":"Reda Sadki (2025). Artificial intelligence, accountability, and authenticity: knowledge production and power in global health crisis. Reda Sadki: Learning to make a difference. https://doi.org/10.59350/w1ydf-gd85"},{"id":"https://doi.org/10.59350/redasadki.20995","unstructured":"Reda Sadki (2025). When funding shrinks, impact must grow: the economic case for peer learning networks. Reda Sadki: Learning to make a difference. https://doi.org/10.59350/redasadki.20995"},{"id":"https://doi.org/10.59350/redasadki.21123","unstructured":"Reda Sadki (2025). Why peer learning is critical to survive the Age of Artificial Intelligence. Reda Sadki: Learning to make a difference. https://doi.org/10.59350/redasadki.21123"},{"id":"https://doi.org/10.59350/6rjnm-1rd08","unstructured":"Reda Sadki (2026). Introducing Claude Cardot, our first AI co-worker to support frontline health and humanitarian leaders. Reda Sadki: Learning to make a difference. https://doi.org/10.59350/6rjnm-1rd08"},{"id":"https://doi.org/10.59350/1bqm0-1d126","unstructured":"Reda Sadki (2026). 
OECD Digital Education Outlook 2026: How can AI help human beings learn and grow?. Reda Sadki: Learning to make a difference. https://doi.org/10.59350/1bqm0-1d126"}],"registered_at":0,"relationships":[],"rid":"7w31z-37595","status":"active","summary":"In my Day 1 article, I wrote that the OECD Digital Education Outlook 2026 conference documented performance gains alongside learning losses, efficiency alongside declining human competence, and the emergence of what Dragan Gasevic called \u201cmetacognitive laziness.\u201d I described a day that did not offer comfort.  Where the first day established the tension between performance and learning, the second day forced the question of what to do about it.","tags":["Artificial Intelligence","AI4Health","Andreas Schleicher","Empower Learners For The Age Of AI","George Siemens"],"title":"AI self-replacement: what happens when we delegate our thoughts to artificial intelligence?","updated_at":1775203258,"url":"https://redasadki.me/2026/04/03/ai-self-replacement-what-happens-when-we-delegate-our-thoughts-to-artificial-intelligence/","version":"v1"},{"abstract":null,"archive_url":null,"authors":[{"affiliation":[{"id":"https://ror.org/013meh722","name":"University of Cambridge"}],"contributor_roles":[],"family":"Madhavapeddy","given":"Anil","url":"https://orcid.org/0000-0001-8954-2428"}],"blog":{"archive_collection":null,"archive_host":null,"archive_prefix":null,"archive_timestamps":null,"authors":null,"canonical_url":null,"category":"computerAndInformationSciences","community_id":"472a49be-dc61-4a17-97f0-d1ff17b0dadd","created_at":1760341563.110877,"current_feed_url":null,"description":null,"doi_as_guid":false,"favicon":"https://anil.recoil.org/assets/favicon.ico","feed_format":"application/feed+json","feed_url":"https://anil.recoil.org/perma.json","filter":null,"funding":null,"generator":"Other","generator_raw":"Other","home_page_url":"https://anil.recoil.org/notes","id":"1436e2f2-fbbf-4741-897f-5198070c7195","indexed":true,"issn":null,"language":"en","license":"https://creativecommons.org/licenses/by/4.0/legalcode","mastodon":null,"prefix":"10.59350","registered_at":0,"relative_url":null,"ror":null,"secure":true,"slug":"anil","status":"active","subfield":"1702","subfield_validated":null,"title":"Anil Madhavapeddy's feed","updated_at":1775288909.344299,"use_api":null,"use_mastodon":false,"user_id":null},"blog_name":"Anil Madhavapeddy's feed","blog_slug":"anil","content_html":"<p>After my <a href=\"https://anil.recoil.org/notes/aoah-2025\">December of agentic coding</a> sprint, I was left quite\n<a href=\"https://marvinh.dev/blog/ddosing-the-human-brain/\">frazzled</a> but also with a\npractical problem. I've got two kinds of libraries: the ones I care about (and\nhandcraft), and the wild experiments that look perfectly formed but are in fact just\n(well typed) slop. 
After <a href=\"https://anil.recoil.org/notes/claude-copilot-sandbox\">a year</a> of doing this, it's obvious that the <em>quality</em> of generated code also varies dramatically as\nmodels steadily improve and agentic harnesses improve context management.</p>\n<p>This post is about an <strong><a href=\"https://github.com/avsm/ocaml-ai-disclosure\">ocaml-ai-disclosure proposal</a></strong> I put together to help track this in OCaml using metadata and <a href=\"https://ocaml.org/manual/5.3/attributes.html\">extension attributes</a> in source code.</p>\n<h2 id=\"the-eu-is-mandating-what-this-summer\"><a aria-hidden=\"true\" class=\"anchor\" href=\"https://anil.recoil.org/notes/opam-ai-disclosure/#the-eu-is-mandating-what-this-summer\"></a>The EU is mandating what this summer?!</h2>\n<p>Toby Jaffey pointed\nme to the <a href=\"https://www.w3.org/community/ai-content-disclosure/\">W3C AI Content Disclosure</a>\n<a href=\"https://anil.recoil.org/notes/2026w13\">last week</a>. The bit that\nproperly surprised me was a legal snippet buried in their README:</p>\n<blockquote>\n<p>The EU AI Act Article 50 (effective August 2026) requires that AI-generated text content be \"marked in a machine-readable format and detectable as artificially generated or manipulated.\"\n<cite>-- <a href=\"https://github.com/dweekly/ai-content-disclosure?tab=readme-ov-file\">ai-content-disclosure</a>, David E. Weekly, 2026</cite></p>\n</blockquote>\n<p>This summer!!! Whether source code falls under \"text content\" is an <a href=\"https://eur-lex.europa.eu/eli/reg/2024/1689/oj\">open\nquestion</a> that hasn't been\naddressed in existing legal commentary as far as I can tell (nor can I read the\nraw 300+ pages to figure it out for myself).  However, regardless of how lawyers eventually\nparse this, voluntary disclosure for code seems like a sensible thing to do anyway.</p>\n<p>I've therefore put together an <strong><a href=\"https://github.com/avsm/ocaml-ai-disclosure\">ocaml-ai-disclosure</a></strong> repository that contains a draft specification and OCaml reference tooling for voluntary, machine-readable AI content disclosure in OCaml code. I'm interested in thoughts both from the OCaml community and from other language ecosystems. Weirdly, I can't find a single other programming language that's proposed anything for source code after some searching.</p>\n<p><a href=\"https://eur-lex.europa.eu/eli/reg/2024/1689/oj\"> <img alt=\"%c\" src=\"https://anil.recoil.org/images/eu-ai-act-1.webp\" title=\"Not even reading the AI Act in my mother tongue shed light on the matter. (Ok ok, it's about laying down harmonised rules on AI and amending existing Regulations)\"/> </a></p>\n<h2 id=\"ai-disclosure-for-ocaml-is-pretty-easy\"><a aria-hidden=\"true\" class=\"anchor\" href=\"https://anil.recoil.org/notes/opam-ai-disclosure/#ai-disclosure-for-ocaml-is-pretty-easy\"></a>AI Disclosure for OCaml is pretty easy</h2>\n<p>The OCaml ecosystem's accumulating code with varying degrees of AI involvement, but there's currently no machine-readable way to signal it. We obviously need to be very careful about how we mix this code into the <a href=\"https://github.com/ocaml/opam-repository\">commons</a>, because the usual social signals we use to review packages are basically useless now.</p>\n<p>However, a binary AI \"yes/no\" flag doesn't capture the reality of how people actually work with these tools. 
The code I wrote during <a href=\"https://anil.recoil.org/notes/aoah-2025\">AoAH</a> ranged from a one-shot <em>\"CC generated the whole module from a one-line prompt\"</em> to <em>\"I wrote the core logic by hand and Claude sorted the pretty-printer boilerplate\"</em> or even <em>\"<a href=\"https://toao.com/blog/check-with-gemini\">I got CC to test with Gemini</a>\"</em>.</p>\n<p>My proposal is extremely simple; here's how it works...</p>\n<h3 id=\"package-disclosures\"><a aria-hidden=\"true\" class=\"anchor\" href=\"https://anil.recoil.org/notes/opam-ai-disclosure/#package-disclosures\"></a>Package Disclosures</h3>\n<p>An opam package can declare its disclosure using extension fields:</p>\n<pre><code>x-ai-disclosure: \"ai-assisted\"\nx-ai-model: \"claude-opus-4-6\"\nx-ai-provider: \"Anthropic\"\n</code></pre>\n<p>Note: This may just become a list of values in the final proposal, but you get the idea.</p>\n<h3 id=\"ocaml-module-level\"><a aria-hidden=\"true\" class=\"anchor\" href=\"https://anil.recoil.org/notes/opam-ai-disclosure/#ocaml-module-level\"></a>OCaml Module level</h3>\n<p>OCaml supports extension attributes, which we use via a floating attribute that applies to the entire compilation unit:</p>\n<pre><code class=\"language-ocaml\">[@@@ai_disclosure \"ai-generated\"]\n[@@@ai_model \"claude-opus-4-6\"]\n[@@@ai_provider \"Anthropic\"]\n\nlet foo = ...\nlet bar = ...\n</code></pre>\n<p>These can also be scoped more finely via declaration attributes that apply to a single binding:</p>\n<pre><code class=\"language-ocaml\">[@@@ai_disclosure \"ai-assisted\"]\n\nlet human_written x = ...\n\nlet ai_helper y =\n  ...\n[@@ai_disclosure \"ai-generated\"]\n</code></pre>\n<p>Disclosure follows a nearest-ancestor inheritance model like the W3C HTML proposal, whereby an explicit annotation overrides the inherited value.</p>\n<p>One detail I'm quite pleased with is that <code>.mli</code> and <code>.ml</code> files are annotated independently, which means that one workflow I use quite a bit, writing the interface files first, can be tracked separately from the implementations themselves.</p>\n<h3 id=\"the-disclosure-vocabulary\"><a aria-hidden=\"true\" class=\"anchor\" href=\"https://anil.recoil.org/notes/opam-ai-disclosure/#the-disclosure-vocabulary\"></a>The disclosure vocabulary</h3>\n<p>I use the same four levels as the W3C vocabulary, which works well enough for HTML:</p>\n<div role=\"region\"><table>\n<tr>\n<th>Value</th>\n<th>Meaning</th>\n</tr>\n<tr>\n<td><code>none</code></td>\n<td>No AI involvement</td>\n</tr>\n<tr>\n<td><code>ai-assisted</code></td>\n<td>Human-authored, AI edited or refined</td>\n</tr>\n<tr>\n<td><code>ai-generated</code></td>\n<td>AI-generated with human prompting and review</td>\n</tr>\n<tr>\n<td><code>autonomous</code></td>\n<td>AI-generated without human oversight</td>\n</tr>\n</table></div><p>I treat the absence of annotation as \"unknown\", not \"none\". The <code>none</code> value exists for authors who <em>want</em> to positively assert human authorship, perhaps because their project's policy requires it or because they want reviewers to know this particular module was deliberately hand-written. Tools may also choose to spelunk back through pre-2022 code and add <code>none</code> automatically where it's obvious.</p>\n<p>If a module contains both human-written and AI-generated bits, you can annotate\nat the package level and add overrides directly in code. OCaml's module system\nand attributes give us a natural hierarchy for this.</p>
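<p>To make the vocabulary and the inheritance rule concrete, here's a minimal sketch of how a checking tool might model them. This is illustration only, with hypothetical names; it's not code from the reference implementation:</p>\n<pre><code class=\"language-ocaml\">(* Sketch with hypothetical names, not the reference tooling. *)\ntype disclosure =\n  | Unknown       (* no annotation present *)\n  | No_ai         (* \"none\": positively asserted human authorship *)\n  | Ai_assisted   (* \"ai-assisted\" *)\n  | Ai_generated  (* \"ai-generated\" *)\n  | Autonomous    (* \"autonomous\" *)\n\nlet of_string = function\n  | \"none\" -> Some No_ai\n  | \"ai-assisted\" -> Some Ai_assisted\n  | \"ai-generated\" -> Some Ai_generated\n  | \"autonomous\" -> Some Autonomous\n  | _ -> None\n\n(* Nearest-ancestor inheritance: an explicit annotation wins;\n   anything unannotated takes the enclosing scope's value. *)\nlet effective ~inherited = function\n  | Unknown -> inherited\n  | explicit -> explicit\n</code></pre>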
<h3 id=\"model-provenance\"><a aria-hidden=\"true\" class=\"anchor\" href=\"https://anil.recoil.org/notes/opam-ai-disclosure/#model-provenance\"></a>Model provenance</h3>\n<p>Each annotation can also optionally carry provenance metadata:</p>\n<ul>\n<li><code>ai_model</code> (the API model identifier, like <code>claude-opus-4-6</code> or <code>gpt-4o</code>)</li>\n<li><code>ai_provider</code> (like <code>Anthropic</code> or <code>OpenAI</code>).</li>\n</ul>\n<p><a href=\"https://mynameismwd.org\">Michael Dales</a> pointed out it's quite common to use multiple models (e.g. to cross-test), so these attributes can be repeated when multiple models contributed.</p>
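<p>For example (again an illustration of the intended usage rather than normative spec text), a module generated with one model and cross-tested with another might carry:</p>\n<pre><code class=\"language-ocaml\">(* Repeated provenance attributes: two models contributed. *)\n[@@@ai_disclosure \"ai-generated\"]\n[@@@ai_model \"claude-opus-4-6\"]\n[@@@ai_provider \"Anthropic\"]\n[@@@ai_model \"gpt-4o\"]\n[@@@ai_provider \"OpenAI\"]\n</code></pre>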
If not, then it'll just be me using it and I'm fine with that!</p>\n</li>\n</ul>\n<h2 id=\"whats-next\"><a aria-hidden=\"true\" class=\"anchor\" href=\"https://anil.recoil.org/notes/opam-ai-disclosure/#whats-next\"></a>What's next</h2>\n<p>I'm starting by integrating this into my own <a href=\"https://anil.recoil.org/notes/aoah-2025\">libraries</a> as a test bed. The Claude Code <a href=\"https://github.com/avsm/ocaml-claude-marketplace\">marketplace skill</a> is already available if you want to try the automated annotation in your own sessions.</p>\n<p>On the tooling side, there are several integration points I'd like to see if this idea has legs:</p>\n<ul>\n<li>odoc could render disclosure metadata alongside module documentation, perhaps using <a href=\"https://jon.recoil.org/blog/2026/03/weeknotes-2026-13.html\">the odoc plugin</a> system that <a href=\"https://jon.recoil.org\">Jon Ludlam</a> has been designing.</li>\n<li>merlin or ocaml-lsp could surface disclosure attributes in hover information in the IDE, giving you a quick 'trust signal' while reading other people's code.</li>\n<li>dune could gain native support for the <code>(ai_disclosure)</code> stanza to make the opam file generation easier.</li>\n<li>opam could eventually use disclosure fields during version solving. I think it'd be useful to have a solver constraint that prefers packages with human-reviewed code where available, and only fall back to AI if nothing else works.</li>\n</ul>\n<p>The full draft specification, FAQ, and reference implementation are at <strong><a href=\"https://github.com/avsm/ocaml-ai-disclosure\">github.com/avsm/ocaml-ai-disclosure</a></strong>.\nI'd love feedback on the spec. File issues on the repo or in the <a href=\"https://discuss.ocaml.org/t/a-proposal-for-voluntary-ai-disclosure-in-ocaml-code/17950\">OCaml Discussion thread</a>.</p><h1>References</h1><ul><li>Madhavapeddy (2026). .plan-26-13: Oxidised, standardised, and syndicated. <a href=\"https://doi.org/10.59350/ddx61-wd948\" target=\"_blank\"><i>10.59350/ddx61-wd948</i></a></li>\n<li>Madhavapeddy (2025). Oh my Claude, we need agentic copilot sandboxing right now. <a href=\"https://doi.org/10.59350/aecmt-k3h39\" target=\"_blank\"><i>10.59350/aecmt-k3h39</i></a></li></ul>","doi":"https://doi.org/10.59350/cxypn-ysv27","funding_references":null,"guid":"https://doi.org/10.59350/cxypn-ysv27","id":"ee0b8845-5954-47c8-bb7f-2f7aa1919276","image":null,"indexed":true,"indexed_at":1775238159,"language":"en","parent_doi":null,"published_at":1775174400,"reference":[{"cito":["cito:citesAsRelated"],"id":"https://doi.org/10.59350/ddx61-wd948","unstructured":" <b>[cito:citesAsRelated]</b>"},{"cito":["cito:citesAsRelated"],"id":"https://doi.org/10.59350/aecmt-k3h39","unstructured":" <b>[cito:citesAsRelated]</b>"}],"registered_at":0,"relationships":[],"rid":"a64qc-zfw45","status":"active","summary":"After my December of agentic coding sprint, I was left quite frazzled but also with a practical problem. 
I've got two kinds of libraries: the ones I care about (and handcraft), and the wild experiments that look perfectly formed but are in fact just (well typed) slop.","tags":["Ai","Ocaml","Oxcaml","Standards","Policy"],"title":"A Proposal for Voluntary AI Disclosure in OCaml Code","updated_at":1775174400,"url":"https://anil.recoil.org/notes/opam-ai-disclosure","version":"v1"},{"abstract":null,"archive_url":null,"authors":[{"affiliation":[{"name":"Front Matter"}],"contributor_roles":[],"family":"Fenner","given":"Martin","url":"https://orcid.org/0000-0003-1419-2405"}],"blog":{"archive_collection":22096,"archive_host":null,"archive_prefix":"https://wayback.archive-it.org/22096/20231101172748/","archive_timestamps":[20231101172748,20240501180447,20241101172601],"authors":[{"name":"Martin Fenner","url":"https://orcid.org/0000-0003-1419-2405"}],"canonical_url":null,"category":"computerAndInformationSciences","community_id":"91dd2c24-5248-4510-9c2b-30b772bf8b60","created_at":1672561153,"current_feed_url":"","description":"The Front Matter Blog covers the intersection of science and technology since 2007.","doi_as_guid":false,"favicon":"https://rogue-scholar.org/api/communities/15a362ea-8138-42b8-917f-1840a92addf8/logo","feed_format":"application/atom+xml","feed_url":"https://blog.front-matter.de/atom","filter":null,"funding":null,"generator":"Ghost","generator_raw":"Ghost 5.52","home_page_url":"https://blog.front-matter.de","id":"74659bc5-e36e-4a27-901f-f0c8d5769cb8","indexed":null,"issn":"2749-9952","language":"en","license":"https://creativecommons.org/licenses/by/4.0/legalcode","mastodon":"https://hachyderm.io/@mfenner","prefix":"10.53731","registered_at":1729685319,"relative_url":null,"ror":null,"secure":true,"slug":"front_matter","status":"active","subfield":"1710","subfield_validated":null,"title":"Front Matter","updated_at":1775288960.43165,"use_api":true,"use_mastodon":true,"user_id":"8498eaf6-8c58-4b58-bc15-27eda292b1aa"},"blog_name":"Front Matter","blog_slug":"front_matter","content_html":"<h2 id=\"blogs-added-to-rogue-scholar\">Blogs added to Rogue Scholar</h2><p>One blog was added in March. This increases the number of participating blogs (after adjusting for retired blogs) to&nbsp;<strong>186</strong>, and the number of archived posts has grown to&nbsp;<strong>49,606</strong>&nbsp;\u2013 Rogue Scholar is getting closer to the big milestones of 200 participating blogs with 50,000 posts!</p><h3 id=\"orion-dbs\"><a href=\"https://rogue-scholar.org/communities/orion\" rel=\"noreferrer\">ORION-DBs</a></h3><p><em>Library and Information Sciences, English.</em><br><a href=\"https://orion-dbs.community/blog/\">https://orion-dbs.community/blog/</a></p><p>The backlog of new blog submissions is still not resolved, so please be patient. You can always reach out via&nbsp;<a href=\"https://join.slack.com/t/rogue-scholar/shared_invite/zt-2ylpq1yoy-o~TkxDarfz5LSMhGSCYtiA\" rel=\"noreferrer\">Slack</a>,&nbsp;<a href=\"mailto:info@rogue-scholar.org\" rel=\"noreferrer\">email</a>,&nbsp;<a href=\"https://wisskomm.social/@rogue_scholar\" rel=\"noreferrer\">Mastodon</a>, or&nbsp;<a href=\"https://bsky.app/profile/rogue-scholar.bsky.social\" rel=\"noreferrer\">Bluesky</a>&nbsp;to ask about the status of your submission.</p><h2 id=\"technical-updates\">Technical Updates</h2><p>One focus of the technical work in March was on&nbsp;infrastructure improvements. 
The monitoring of the Rogue Scholar infrastructure was improved by deploying a <a href=\"https://doi.org/10.53731/3w24g-cdz85\" rel=\"noreferrer\">self-hosted observability platform</a> for logs, metrics and errors with dashboards and alerting using the Grafana open source platform:</p><figure class=\"kg-card kg-image-card\"><img src=\"https://blog.front-matter.de/content/images/2026/04/image.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1600\" height=\"793\" srcset=\"https://blog.front-matter.de/content/images/size/w600/2026/04/image.png 600w, https://blog.front-matter.de/content/images/size/w1000/2026/04/image.png 1000w, https://blog.front-matter.de/content/images/2026/04/image.png 1600w\" sizes=\"(min-width: 720px) 720px\"></figure><p>The dashboard for key metadata metrics initially released in March 2025 was improved visually and <a href=\"https://doi.org/10.53731/809xc-y7r79\" rel=\"noreferrer\">launched for communities</a>, including blog communities:</p><figure class=\"kg-card kg-image-card\"><img src=\"https://blog.front-matter.de/content/images/2026/04/image-1.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1600\" height=\"610\" srcset=\"https://blog.front-matter.de/content/images/size/w600/2026/04/image-1.png 600w, https://blog.front-matter.de/content/images/size/w1000/2026/04/image-1.png 1000w, https://blog.front-matter.de/content/images/2026/04/image-1.png 1600w\" sizes=\"(min-width: 720px) 720px\"></figure><p>This makes it much easier for readers to get an overview for each blog participating in Rogue Scholar, and for blog authors to see gaps in metadata coverage that they can improve.</p><p>This week the <a href=\"https://doi.org/10.53731/dp6ra-trw41\" rel=\"noreferrer\">blog self-management in Rogue Scholar was improved</a>, enabling blog owners to update all relevant blog metadata.</p><h2 id=\"community-updates\">Community Updates</h2><p>The technical updates mentioned above are part of an effort to align Rogue Scholar better with the <a href=\"https://inveniordm.docs.cern.ch/\" rel=\"noreferrer\">InvenioRDM repository platform</a>. This will make it easier in the long run to sustain and update Rogue Scholar, as an increasing proportion of the required functionality is built into InvenioRDM and developed and used by other repositories.</p><p>Please use&nbsp;<a href=\"https://join.slack.com/t/rogue-scholar/shared_invite/zt-2ylpq1yoy-o~TkxDarfz5LSMhGSCYtiA\" rel=\"noreferrer\">Slack</a>,&nbsp;<a href=\"mailto:info@rogue-scholar.org\" rel=\"noreferrer\">email</a>,&nbsp;<a href=\"https://wisskomm.social/@rogue_scholar\" rel=\"noreferrer\">Mastodon</a>, or&nbsp;<a href=\"https://bsky.app/profile/rogue-scholar.bsky.social\" rel=\"noreferrer\">Bluesky</a>&nbsp;if you have any questions or comments.</p><div class=\"kg-card kg-callout-card kg-callout-card-blue\"><div class=\"kg-callout-text\">Rogue Scholar is a scholarly infrastructure that is free for all authors and readers. You can support Rogue Scholar with a one-time or recurring&nbsp;<a href=\"https://ko-fi.com/rogue_scholar\" rel=\"noreferrer\">donation</a>&nbsp;or by becoming a sponsor.</div></div><h2 id=\"references\">References</h2><ol><li>Fenner, M. (2026, March 16). Increasing operational transparency in Rogue Scholar. <em>Front Matter</em>. <a href=\"https://doi.org/10.53731/3w24g-cdz85\">https://doi.org/10.53731/3w24g-cdz85</a></li><li>Fenner, M. (2026, March 26). Introducing Rogue Scholar community dashboards. <em>Front Matter</em>. 
<a href=\"https://doi.org/10.53731/809xc-y7r79\">https://doi.org/10.53731/809xc-y7r79</a></li><li>Fenner, M. (2026, April 1). Rogue Scholar improves blog self-management. <em>Front Matter</em>. <a href=\"https://doi.org/10.53731/dp6ra-trw41\">https://doi.org/10.53731/dp6ra-trw41</a></li></ol>","doi":"https://doi.org/10.53731/wfp26-6ej12","funding_references":null,"guid":"https://doi.org/10.53731/wfp26-6ej12","id":"a8281cac-3d8f-453f-a078-3e2cd2b74251","image":"https://images.unsplash.com/photo-1573500883557-6049a3ab38b6?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=M3wxMTc3M3wwfDF8c2VhcmNofDQ1fHxlYXN0ZXJ8ZW58MHx8fHwxNzc1MTQzNTcwfDA&ixlib=rb-4.1.0&q=80&w=2000","indexed":true,"indexed_at":1775145877,"language":"en","parent_doi":null,"published_at":1775145556,"reference":[{"id":"https://doi.org/10.53731/3w24g-cdz85","type":"BlogPost","unstructured":"Fenner, M. (2026, March 16). Increasing operational transparency in Rogue Scholar. <i>Front Matter</i>. https://doi.org/10.53731/3w24g-cdz85"},{"id":"https://doi.org/10.53731/809xc-y7r79","type":"BlogPost","unstructured":"Fenner, M. (2026, March 26). Introducing Rogue Scholar community dashboards. <i>Front Matter</i>. https://doi.org/10.53731/809xc-y7r79"},{"id":"https://doi.org/10.53731/dp6ra-trw41","type":"BlogPost","unstructured":"Fenner, M. (2026, April 1). Rogue Scholar improves blog self-management. <i>Front Matter</i>. https://doi.org/10.53731/dp6ra-trw41"}],"registered_at":0,"relationships":[],"rid":"433q3-rg192","status":"active","summary":"Blogs added to Rogue Scholar  One blog was added in March.","tags":["Rogue Scholar","Newsletter"],"title":"Rogue Scholar Newsletter March 2026","updated_at":1775145556,"url":"https://blog.front-matter.de/posts/rogue-scholar-newsletter-march-2026/","version":"v1"},{"abstract":null,"archive_url":null,"authors":[{"contributor_roles":[],"family":"Turner","given":"Stephen D."}],"blog":{"archive_collection":null,"archive_host":null,"archive_prefix":null,"archive_timestamps":null,"authors":[{"name":"Stephen Turner"}],"canonical_url":null,"category":"biologicalSciences","community_id":"382941a7-2ffa-41df-8bbb-5f772188517f","created_at":1734172613,"current_feed_url":null,"description":"A practicing data scientist's take on AI, genomics, biosecurity, and the ways AI is reshaping how science gets done. Weekly updates from the field. Occasional notes on programming.","doi_as_guid":false,"favicon":null,"feed_format":"application/rss+xml","feed_url":"https://blog.stephenturner.us/feed","filter":null,"funding":null,"generator":"Substack","generator_raw":"Substack","home_page_url":"https://blog.stephenturner.us/","id":"bffe125c-3dfa-4f25-998f-e62878677c7c","indexed":true,"issn":null,"language":"en","license":"https://creativecommons.org/licenses/by/4.0/legalcode","mastodon":"https://bsky.app/profile/stephenturner.us","prefix":"10.59350","registered_at":0,"relative_url":null,"ror":null,"secure":true,"slug":"stephenturner","status":"active","subfield":"1311","subfield_validated":true,"title":"Paired Ends","updated_at":1775289119.319881,"use_api":null,"use_mastodon":false,"user_id":"ae63ef98-7475-4cc1-b3eb-244d5e096f0f"},"blog_name":"Paired Ends","blog_slug":"stephenturner","content_html":"<p>Earlier this week I wrote about a <a href=\"https://ietresearch.onlinelibrary.wiley.com/doi/10.1049/enb2.70003\">paper</a> by Jacob Beal (Raytheon BBN Technologies) and Tessa Alexanian (International Biosecurity and Biosafety Initiative for Science, IBBIS) on creating enforceable biosecurity standards for nucleic acid providers. 
</p><p><strong><a href=\"https://blog.stephenturner.us/p/enforceable-biosecurity-standards-nucleic-acid-providers\">Creating Enforceable Biosecurity Standards for Nucleic Acid Providers</a></strong></p><p>It\u2019s a good paper, and I recommend reading it! I noted toward the end of the post that the customer screening side felt a bit undercooked. <a href=\"https://tessa.fyi/\">Tessa Alexanian</a>, one of the paper\u2019s coauthors, <a href=\"https://blog.stephenturner.us/p/enforceable-biosecurity-standards-nucleic-acid-providers/comment/236846598\">left a comment</a> (thanks Tessa!) pointing me to <a href=\"https://ibbis.bio/translating-customer-screening-guidance-into-practical-tools/\">additional work</a> she and Sarah Carter had done on translating customer screening guidance into practical tools, and to a <a href=\"https://www.biorxiv.org/content/10.64898/2026.02.27.708645v1\">new preprint from Acelas et al.</a> evaluating AI-assisted customer verification for synthetic nucleic acid screening.</p><blockquote><p><strong>Acelas, A., Palya, H., Flyangolts, K., Fady, P. E., &amp; Nelson, C. (2026). Evaluating AI-Assisted Customer Verification for Synthetic Nucleic Acid Screening. bioRxiv 2026.02.27.708645; doi: <a href=\"https://doi.org/10.64898/2026.02.27.708645\">https://doi.org/10.64898/2026.02.27.708645</a>.</strong></p></blockquote><p>Here\u2019s the problem the paper addresses: When someone orders a synthetic nucleic acid that matches a sequence of concern, the provider needs to verify that the customer is who they say they are and has a legitimate reason to order it. 
This <em>legitimacy screening</em> involves checking institutional affiliations, email domains, sanctions lists, and relevant publications or patents. It\u2019s tedious, largely mechanical work, and the cost discourages adoption. Legitimacy screening runs roughly ten times more expensive per order than sequence screening alone.</p><p>Acelas et al. tested 5 LLMs (Claude Sonnet 4, Gemini 2.5 Pro, Grok 4, GLM 4.6, and MiniMax M2) on these verification tasks against a human baseline, using 41 customer profiles paired with simulated orders for sequences of concern. The best-performing model, Gemini 2.5 Pro equipped with bibliographic and sanctions APIs, achieved a 90% overall pass rate compared to about 80% for human screeners. Total cost per customer dropped from $14.04 for manual screening to $1.18 with AI assistance. For the information-gathering tasks alone (excluding human review of the final decision), the average was $0.23 per customer, roughly 50 times cheaper.</p><figure><figcaption class=\"image-caption\">Table 2 from <a href=\"https://www.biorxiv.org/content/10.64898/2026.02.27.708645v1.full\">Acelas 2026</a>: Per-customer screening costs and processing times. \u201cInformation gathering\u201d covers Tasks 1\u20135 only; \u201ctotal cost\u201d adds the time cost of human review of the AI-generated report. For human baselines, these phases were not separated, so only totals are reported. Human costs estimated at $54/hour based on advertised salaries at a large DNA synthesis provider. AI costs include per-token API pricing and Tavily web search queries ($0.08/query); other tools were cost-free. All figures are averages across 41 customer profiles.</figcaption></figure><p>A couple things stood out. First, cost and performance were uncorrelated across models (Section 3.2 of the paper). The best model, Gemini 2.5 Pro, was also the second cheapest. Open-source models with lower per-token pricing lost their cost advantage through higher token consumption and more search queries. 
Second, giving models access to specialized tools (ORCID, Europe PMC, a sanctions list API) helped on most tasks but actually hurt on background work search, because models with API access performed fewer web searches and missed patents and news articles not indexed in academic databases (Section 3.1). Third, error rates varied geographically: Chinese customers had notably higher missed-flag rates on email domain verification, largely because researchers there more often use personal rather than institutional email addresses (Section 3.3.1).</p><p>The authors are careful to note that the final ship-or-reject decision should stay with humans. AI handles the information gathering but a person decides what to do with it. This feels like the right framing, and as Tessa noted in her comment, the emergence of tools like <a href=\"https://github.com/alejoacelas/api-cliver\">Cliver</a> (the screening API released alongside this paper) means providers increasingly don\u2019t have to build this capability from scratch. That lowers the bar for adopting customer screening, which in turn makes it more reasonable to expect higher standards across the industry.</p>","doi":"https://doi.org/10.59350/6xzxd-5kb71","funding_references":null,"guid":"192939021","id":"4061d058-d77f-497e-afbc-99776b3bd489","image":"https://substackcdn.com/image/fetch/$s_!SoOb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2dbc45f0-2131-481b-82ef-afa37306cd6c_892x762.png","indexed":true,"indexed_at":1775130458,"language":"en","parent_doi":null,"published_at":1775129325,"reference":[],"registered_at":0,"relationships":[],"rid":"d3psg-jz607","status":"active","summary":"A new preprint shows AI can handle legitimacy verification at a fraction of the cost.","tags":["Biosecurity","AI"],"title":"AI-Assisted Customer Screening for DNA Synthesis Orders","updated_at":1775129325,"url":"https://blog.stephenturner.us/p/ai-customer-screening-dna-synthesis","version":"v1"},{"abstract":null,"archive_url":null,"authors":[{"name":"Stephen Turner"}],"blog":{"archive_collection":null,"archive_host":null,"archive_prefix":null,"archive_timestamps":null,"authors":[{"name":"Stephen Turner"}],"canonical_url":null,"category":"biologicalSciences","community_id":"382941a7-2ffa-41df-8bbb-5f772188517f","created_at":1734172613,"current_feed_url":null,"description":"A practicing data scientist's take on AI, genomics, biosecurity, and the ways AI is reshaping how science gets done. Weekly updates from the field. 
Occasional notes on programming.","doi_as_guid":false,"favicon":null,"feed_format":"application/rss+xml","feed_url":"https://blog.stephenturner.us/feed","filter":null,"funding":null,"generator":"Substack","generator_raw":"Substack","home_page_url":"https://blog.stephenturner.us/","id":"bffe125c-3dfa-4f25-998f-e62878677c7c","indexed":true,"issn":null,"language":"en","license":"https://creativecommons.org/licenses/by/4.0/legalcode","mastodon":"https://bsky.app/profile/stephenturner.us","prefix":"10.59350","registered_at":0,"relative_url":null,"ror":null,"secure":true,"slug":"stephenturner","status":"active","subfield":"1311","subfield_validated":true,"title":"Paired Ends","updated_at":1775289119.319881,"use_api":null,"use_mastodon":false,"user_id":"ae63ef98-7475-4cc1-b3eb-244d5e096f0f"},"blog_name":"Paired Ends","blog_slug":"stephenturner","content_html":"<p><em>Hello, friends. This recap comes a day early because I\u2019ll be leaving tomorrow for a long overdue holiday in France. No updates next week. Au revoir mes amis. </em>\ud83c\uddeb\ud83c\uddf7\ud83e\uddc0\ud83c\udf77</p><div><hr></div><p>Chris Lu, et al., in <em>Nature</em>: <strong><a href=\"https://www.nature.com/articles/s41586-026-10265-5\">Towards end-to-end automation of AI research</a></strong>. Sakana AI\u2019s \u201cAI Scientist\u201d pipeline handles the full ML research loop: ideation, literature search, experiment design and execution, paper writing, and automated peer review. One of its manuscripts scored above the acceptance threshold at an ICLR 2025 workshop (which had a 70% acceptance rate, to be fair). Paper quality as judged by their automated reviewer tracks closely with foundation model capability, and with compute budget per paper, which tells you where this is headed even if the current output isn\u2019t threatening anyone\u2019s tenure case. 
For a quicker summary, read <strong><a href=\"https://sakana.ai/ai-scientist-nature/\">Sakana\u2019s blog post</a></strong>.</p><figure><figcaption class=\"image-caption\">Fig. 2 from <a href=\"https://www.nature.com/articles/s41586-026-10265-5\">Lu 2026</a>: Selected sections from a paper generated by The AI Scientist that was accepted via peer review at a top-tier machine learning conference workshop.</figcaption></figure><p><em>Counterpoint</em>: Steven Salzberg writes <strong><a href=\"https://stevensalzberg.substack.com/p/ai-is-starting-to-look-like-pseudoscience\">AI badly needs a dose of skepticism</a></strong>. Salzberg goes after DNA foundation models, arguing that their central claim (predict the effects of any mutation from sequence alone) is biologically implausible and largely unfalsifiable, two properties he knows well from years of writing about pseudoscience nonsense (homeopathy, acupuncture). Teams build ever-larger models first, then go looking for problems, which is backwards. The core critique of unfalsifiable prediction claims and <em><a href=\"https://www.nature.com/articles/s41586-026-10265-5\">Nature</a></em><a href=\"https://www.nature.com/articles/s41586-026-10265-5\">\u2019s eagerness to publish them</a> is hard to dismiss. See above.</p><p>Arjun Raj: <strong><a href=\"https://arjunrajlab.substack.com/p/transitioning-to-being-a-pi-in-the\">Transitioning to Being a PI in the Age of AI</a></strong>. 
A short and honest post about the asymmetry in how faculty and trainees experience the current AI moment in computational biology. Faculty are exhilarated because they\u2019ve spent years developing the skill of evaluating analyses without doing them line by line; trainees are more ambivalent because they\u2019re being asked to make that same transition in months rather than years or decades. </p><p>My SDS colleague Heman Shakeri released full materials for his <strong><a href=\"https://shakeri-lab.github.io/dl-course-site/\">Deep Learning Course</a></strong> here at UVA. A complete, openly licensed (CC BY 4.0) deep learning course from UVA\u2019s School of Data Science, built for the online MSDS program (DS 6050) and public since Fall 2025. The 12-module sequence starts with NumPy-first implementations of MLPs and backpropagation, moves through CNNs, RNNs, encoder-decoder architectures, and the full attention/transformer stack, and finishes with ViTs, LoRA/QLoRA, and generative models including diffusion. Each module has lecture videos, notes, slides, and Colab assignments with unit tests. The <a href=\"https://shakeri-lab.github.io/dl-course-site/syllabus.pdf\">syllabus</a> lays out the pedagogical logic: three phases moving from from-scratch understanding to architectural depth to modern practice. Too much deep learning education lives in disconnected repos and YouTube playlists; having everything in one structured, reusable site with a clear arc is more valuable than any single component on its own.</p><p>Carl Zimmer, NYT: <strong><a href=\"https://www.nytimes.com/2026/03/26/science/biotechnology-pharmaceuticals-eggs.html?unlocked_article_code=1.WVA.g9xY.xi8SRx_pwAC9&amp;smid=url-share\">How to Turn a Chicken Egg Into a Drug Factory</a></strong>. <a href=\"https://www.neionbio.com/\">Neion Bio</a>, a startup that emerged from stealth this week, is engineering chickens whose eggs produce pharmaceutical proteins, potentially replacing the Chinese hamster ovary (CHO) cell lines that currently dominate biologic drug manufacturing. The company claims 3,900 hens could meet global demand for Humira at a fraction of the cost of a CHO facility (Merck just broke ground on a $1B Keytruda plant, for comparison). Sven Bocklandt, Neion's chief scientific officer, was a colleague of mine at Colossal, where we worked on the dire wolf program together. Zimmer's writeup (great as usual) discusses the history of how CHO cells became the default and why advances in primordial germ cell manipulation are finally making avian biomanufacturing viable.</p><p>New NIH Highlighted Topic: <strong><a href=\"https://grants.nih.gov/funding/find-a-fit-for-your-research/highlighted-topics/54\">Advancing \u201cScience of Science\u201d Research to Understand and Strengthen the Biomedical Research Ecosystem</a></strong>. These are not NOFOs, but descriptions of scientific areas that NIH ICOs are interested in funding through existing parent announcements. This one encourages investigator-initiated applications on the \u201cscience of science,\u201d the study of how the biomedical research ecosystem itself works. 
Topics include workforce retention, research capacity building, rigor and reproducibility, translation bottlenecks, and the economic returns of research investment.</p><p>Yet another new NIH Highlighted Topic: <strong><a href=\"https://grants.nih.gov/funding/find-a-fit-for-your-research/highlighted-topics/19\">BRAIN Initiative: Advancing Human Neuroscience and Precision Molecular Therapies for Transformative Treatments</a></strong>. This one covers the <a href=\"https://braininitiative.nih.gov/\">BRAIN Initiative</a>\u2019s priorities in human neural circuit research, clinical neurotechnology, and precision molecular therapies (optogenetics, chemogenetics). 
Eleven ICOs are listed as participating.</p><p>More NIH news: <strong><a href=\"https://grants.nih.gov/grants/guide/notice-files/NOT-OD-26-064.html\">NOT-OD-26-064: Update of NIH Late Application Submission Policy and End of Continuous Submission</a></strong>. NIH is ending its Continuous Submission policy, which let PIs serving on review panels submit applications outside normal deadlines. Effective for due dates on or after May 25, 2026.</p><p><strong><a href=\"https://content.govdelivery.com/accounts/USNSF/bulletins/410a918\">TIP Leadership Update</a></strong>. NSF's Erwin Gianchandani announces the retirement of Gracie Narcho, who served as deputy assistant director and directorate head for the Technology, Innovation and Partnerships directorate since its founding. Gianchandani credits Narcho with co-authoring the vision that became TIP before it had authorizing legislation, and with launching programs like the NSF Regional Innovation Engines and the I-Corps Hubs during a career spanning three decades and multiple NSF directorates.</p><p>Austin Dickey: <strong><a href=\"https://positron.posit.co/blog/posts/2026-03-31-python-type-checkers/\">How we chose Positron's Python type checker</a></strong>. Posit evaluated 4 open-source Python language servers (Pyrefly, ty, Basedpyright, Zuban) across features, correctness, performance, and ecosystem health, then chose Meta's Pyrefly as Positron's default. The most interesting section is the comparison of type-checking philosophies: ty follows a \"gradual guarantee\" where removing a type annotation never introduces an error, while Pyrefly infers types aggressively even in untyped code. 
Good overview of a space that's moving fast.</p><p>Mario Zechner: <strong><a href=\"https://mariozechner.at/posts/2026-03-25-thoughts-on-slowing-the-fuck-down/\">Thoughts on slowing the f*ck down</a></strong>. A year into production use of coding agents, Zechner argues that the compounding of small errors at machine speed, combined with agents\u2019 inability to learn from mistakes and their low-recall search over large codebases, is producing unmaintainable messes far faster than human teams ever could. The prescription: treat agents as task-level tools with humans as the quality gate, write your architecture by hand, and set deliberate limits on how much generated code you accept per day.</p><p>Theo Roe: <strong><a href=\"https://www.jumpingrivers.com/blog/why-learning-r-is-a-good-career-move-in-2026/\">Why Learning R is a Good Career Move in 2026</a></strong>. A short, beginner-oriented pitch from Jumping Rivers (an R training company, so calibrate accordingly) making the case for R as a first language for data work. Nothing new for experienced practitioners, but a reasonable overview of where R still has a strong foothold: healthcare, pharma, government, academic research, and anywhere visualization and reproducible reporting are central. 
The honest caveat at the end is useful: if you want software engineering or large-scale production systems, you probably need Python.</p><p>Matt Lubin at Bio-Security Stack: <strong><a href="https://mattsbiodefense.substack.com/p/five-things-march-29-2026">Five Things: March 29, 2026</a></strong>: Anthropic temporary win, scheming, biodesign by LLM, White House advisors, Anthropic security.</p><p>Ryan Layer: <strong><a href="https://ryanlayerlab.github.io/layerlab/2026/03/23/What-Do-I-Teach-Now.html">What do I teach now?</a></strong>. Ryan has taught Software Engineering for Scientists at CU Boulder since 2019, and coding agents have forced him to rethink the whole course. In science, code is the method, so vibe coding is a reproducibility problem in addition to being a quality problem. He’s now rebuilding the class around open questions like who audits AI-generated analyses in ten years if no one learns to build from scratch.</p><blockquote><p>The thought of my students building software by prompting and accepting the output without reading the code keeps me up at night. […] For science, where the code is the method, vibe coding is not an option.</p></blockquote><p>Claus Wilke at Genes, Minds, Machines: <strong><a href="https://blog.genesmindsmachines.com/p/creating-reproducible-data-analysis">Creating reproducible data analysis pipelines</a></strong>. A case against the “run everything from raw data” ideal of reproducibility. Claus argues that intermediate CSV files saved right before plotting are more durable than any end-to-end pipeline: pipelines break, Docker images rot,<a href="#footnote-1">1</a> and students (and PIs!) lose afternoons rerunning everything to swap a violin plot for a boxplot.</p>
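<p>The two-script version of this pattern looks roughly like the sketch below (my paraphrase of the idea, not code from Claus's post; the file and column names are invented for illustration):</p><pre><code># analysis.R -- run the expensive pipeline once, then cache the exact data
# frame each figure needs as a plain CSV (paths/columns are hypothetical)
library(dplyr)
library(readr)

results <- read_csv("raw/measurements.csv") |>
  group_by(sample_id, condition) |>
  summarize(mean_value = mean(value), .groups = "drop")

# Save the data frame right before plotting; the CSV outlives the pipeline,
# its package versions, and any Docker image it once ran in
write_csv(results, "intermediate/fig1_data.csv")

# fig1.R -- cheap to rerun, independent of the pipeline above
library(ggplot2)
library(readr)

fig1_data <- read_csv("intermediate/fig1_data.csv")

ggplot(fig1_data, aes(condition, mean_value)) +
  geom_boxplot()  # was geom_violin(); swapping geoms no longer means
                  # rerunning the whole analysis
</code></pre>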
<p><strong><a href="https://ropensci.org/blog/2026/03/30/news-mars-2026/">rOpenSci News Digest, March 2026</a></strong>: dev guide, champions program, software review and usage of AI tools.</p><p>Joe Rickert: <strong><a href="https://rworks.dev/posts/Feb-2026-Top40/">February 2026 Top 40 New CRAN Packages</a></strong>: AI, machine learning, biology, medical applications, physics, Buddhism, statistics, climate science, computational methods, data, surveys, ecology, time series, epidemiology, utilities, genomics, and visualization.</p><p><strong><a href="https://rweekly.org/2026-W14.html">R Weekly 2026-W14</a></strong>: ggauto, alt text, scientific coffee.</p><p>Max Kuhn: <strong><a href="https://tidyverse.org/blog/2026/03/tabpfn-0-1-0/">tabpfn 0.1.0</a></strong>. An R interface (via reticulate) to TabPFN, a pre-trained deep learning model for tabular prediction from PriorLabs (I wrote a <a href="https://blog.stephenturner.us/i/156727044/accurate-predictions-on-small-data-with-a-tabular-foundation-model">short summary of TabPFN</a> last year). The model was trained entirely on synthetic data generated from complex graph models simulating correlation structures, skewness, missing data, interactions, and more. No fitting happens on your data; your training set primes an attention mechanism via in-context learning.</p>
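<p>I haven't tried the package yet, but given the tidymodels provenance, usage presumably looks something like the sketch below. To be clear, tabpfn_classifier() and the engine name are my guesses, not the documented API; see Max's post for the real interface.</p><pre><code># Hypothetical sketch of a parsnip-style interface to TabPFN -- the spec
# function tabpfn_classifier() and the "tabpfn" engine name are assumptions
library(tidymodels)
library(tabpfn)

data(two_class_dat, package = "modeldata")
split <- initial_split(two_class_dat, strata = Class)

fit_obj <-
  tabpfn_classifier() |>                  # hypothetical model spec
  set_engine("tabpfn") |>
  fit(Class ~ ., data = training(split))  # no gradient updates happen here:
                                          # the training set is stored and
                                          # later conditions TabPFN's attention
                                          # at prediction time (in-context
                                          # learning)

predict(fit_obj, new_data = testing(split), type = "prob")
</code></pre>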
<p>Elizabeth Ginexi: <strong><a href="https://elizabethginexi.substack.com/p/inside-the-nih-forecast-graveyard">Inside the NIH Forecast Graveyard</a></strong>. An accounting of NIH funding opportunities that were announced on grants.gov and then never published. Of 336 open forecasts, 205 have passed their promised posting dates with no explanation. The first wave of cancellations in April 2025 was keyword-driven (DEI, HIV, health disparities), but the later waves and the larger mass of silently expiring forecasts hit basic science, clinical infrastructure, and congressionally mandated programs like the BRAIN Initiative and Gabriella Miller Kids First. Ginexi, a former NIH insider, makes the dataset available for anyone to check.</p><p>Niko McCarty: <strong><a href="https://nikomc.com/2026/04/01/optogenetics-serendipity/">Many Great Inventions Weren’t Made by “Serendipity”</a></strong>. Niko uses <a href="https://en.wikipedia.org/wiki/Optogenetics">optogenetics</a> as the central case for a broader argument: the breakthroughs we narrate as lucky accidents were usually preceded by years of deliberate preparation and systematic enumeration of possible solutions.</p><p><strong>New papers &amp; preprints:</strong></p><ul><li><p><a href="https://www.nature.com/articles/s41586-026-10265-5">Towards end-to-end automation of AI research</a></p></li><li><p><a href="https://academic.oup.com/bib/article/27/2/bbag131/8553189">Toward next-generation machine learning and deep learning for spatial omics</a></p></li><li><p><a href="https://rdcu.be/faCkJ">High-resolution metagenome assembly for modern long reads with myloasm</a></p></li><li><p><a href="https://www.nejm.org/doi/full/10.1056/NEJMp2516973">The Age Illusion — Limitations of Chronologic Age in Medicine</a></p></li><li><p><a href="https://rdcu.be/faJsm">Accelerating coral assisted evolution to keep pace with climate change</a></p></li><li><p><a href="https://rdcu.be/faNfU">SNP calling, haplotype phasing and allele-specific analysis with long RNA-seq reads</a></p></li><li><p><a href="https://academic.oup.com/cid/advance-article/doi/10.1093/cid/ciag034/8540088">State AIDS Drug Assistance Programs’ Contribution to the US Viral Suppression, 2015–2022</a></p></li><li><p><a href="https://www.nature.com/articles/s41592-026-03047-4">AlphaFold as a prior: experimental structure determination conditioned on a pretrained neural network</a></p></li></ul><div class="footnote"><p><a id="footnote-1">1</a> Paper on this topic coming soon. Stay tuned.</p></div><p>DataCite Staff: <strong><a href="https://datacite.org/blog/a-new-membership-model-for-a-more-equitable-datacite/">A New Membership Model for a More Equitable DataCite</a></strong> (DataCite Blog, <a href="https://doi.org/10.5438/gc07-ah64">https://doi.org/10.5438/gc07-ah64</a>)</p>
<p>Since our founding in 2009, DataCite’s work has been guided by a singular and shared commitment to building and sustaining open infrastructure that everyone can participate in and benefit from. We are a community where all research organizations can belong and where all research outputs, resources, and activities can be shared, discovered, and connected.</p><p>We have followed that founding principle while developing programs and services that address ongoing and emerging use cases, expand global access, and support long-term sustainability. And we’ve taken steps over the years to adapt our membership model alongside these developments, in consultation with our members, Executive Board, and broader community.</p><p>We are now taking the next step forward in this journey.</p><h2>What’s changing</h2><p>Starting this month, we’re introducing key updates to our <a href="https://datacite.org/fees">membership fee structure</a> to re-align with the guiding values behind our origins and our vision for the future.</p><p>We’re making a deliberate shift from a transactional model based on DOI registration quantities to a collective funding model focused on supporting shared open infrastructure.
This means that DataCite’s standard fee structure will no longer include per-DOI fees or fees based on DOI quantities.</p><p>As part of this change, we’re also simplifying how fees are applied and adjusting costs based on <a href="https://fees.datacite.org/countries">country-level economic indicators</a> to achieve a more balanced distribution across member organizations.</p><h2>Why this matters</h2><p>DataCite has always been focused on community-driven infrastructure, and we’ve never been just about DOIs. Moving away from a DOI-centric pricing structure removes disincentives to making all outputs and activities broadly accessible. It allows us to shift the focus from the cost of a single DOI to the potential that can be achieved through rich metadata, lasting connections, and long-term stewardship.</p><p>A simpler and more equitable fee model makes it easier for organizations to contribute to and benefit from shared open infrastructure. This isn’t just about inclusion. It’s also about investing in the quality and completeness of the global research record. Our infrastructure and our metadata stores become more valuable when they are used by and available for everyone.</p><p>We have always supported multiple pathways to participation and multiple ways to use our infrastructure. These updates to the membership fee structure continue to broaden pathways of participation and advance DataCite’s vision of shared ownership, where all organizations can engage in a way that works for them, whether that means accessing services directly, participating in a consortium to share costs and engage in communities of practice, or investing funds in DataCite’s mission.</p><h2>Why now</h2><p>DataCite metadata and metadata retrieval tools have always been freely and openly available to anyone. As a membership association, we sustain our operations through fees for additional member-only services and for participation in DataCite governance. These fees are determined by the membership and Executive Board according to our <a href="https://datacite.org/wp-content/uploads/2023/06/Statutes_26April2022.pdf">statutes</a>, and are designed to support cost recovery and long-term sustainability of DataCite infrastructure while ensuring equitable global access to DataCite membership and services. The fee structure has evolved over the years, and was last updated in 2020.</p><p>As the DataCite community has continued to grow, so has the scale and diversity of how and where DataCite infrastructure is used. A fee model tied closely to DOI volume no longer reflects the full meaning of participation, nor does it support the broadest possible engagement. At the same time, there is increasing recognition across the research ecosystem that shared infrastructure requires shared investment.
Shifting from a transactional model to a collective one positions DataCite to more tightly align sustainability with mission.</p><h2>What remains constant</h2><p>While our fee structure is evolving, DataCite’s services, governance model, and commitment to open infrastructure remain constant. Our membership program, statutes, and fees will continue to be shaped by our General Assembly and Executive Board, while we continue to support existing members in achieving their goals and engage with new organizations joining the community through the pathway that best meets their needs.</p><p>If you are not yet part of the DataCite member community and would like to learn more about <a href="https://datacite.org/become-a-member">membership pathways and benefits</a>, we invite you to <a href="mailto:support@datacite.org">contact our community team</a> and join our <a href="https://datacite.org/event/datacite-membership-essentials/">open community webinar</a> next month. If you’re ready to get started right now, you can submit a <a href="https://datacite.org/membership-inquiry">membership inquiry</a>.</p><p>We look forward to welcoming more organizations into the DataCite community, and to continuing to build open research infrastructure together.</p>
