• 0 Posts
  • 13 Comments
Joined 1 year ago
Cake day: June 6th, 2023

  • I sympathize with the cynicism in your last paragraph, but I'll push back with a little optimism on a couple of points. 1: our capability for speech may be limited by the corporations who have grabbed control of our media platforms, but insofar as freedom of speech refers to our ability to speak freely without retaliation from the government, we do still have real free speech. It's a juvenile point, but given events in the last few years it's not a right I take for granted as I once did. That being said, I did just watch a video of FBI agents interrogating a woman in front of her house for posting non-violent content on Facebook relating to Gaza, which you can add to the pile of evidence that the government is frequently testing the limits of free speech, so… that's not good.

    2: Regulatory authority has become almost laughably meek, granted, but you're commenting on a video of one of the most aggressive regulators to hold the position in as long as I can remember. This is a powerful sign that regulatory capture is not inevitable if we care enough to vote for candidates who will appoint strong regulators – even if it hurts our pride to do so (looking at you, conscientious vote objectors).






  • At some point, if the people don’t say enough, let it all come crashing down, and rebuild a more equitable system

    Who’s organizing this? Your withdrawal isn’t part of some collective movement, it is solitary and impotent. It is harmful. There’s no board room hosting the league of evil medical barons, no agenda item about how Allonzee stopped donating blood so they’d better start taking things seriously. You’re not changing anything, and squeezing sick people isn’t a necessary step in effecting change anyway.

    There’s no “we” and there’s no collapsing. These systems persist despite your personal rebellion, because they’re really good at persisting until faced with overwhelming collective action. Until then, fucking help people.

    Sorry friend, it’s not really about you. But a lot of good people who care about inequity have let their pride convince them that systems will implode if they personally choose to stop participating in them. That’s not how it works. The only people feeling any hurt are the innocents in your own communities.


  • Seriously, I get it. It’s fucking infuriating, but again…what’s the alternative? Is there some way in which this moral rigidity is not holding patients hostage in an impotent effort to force change in a broader healthcare industry?

    There are alternative, more effective methods of effecting change that don't involve sacrificing life or well-being. I implore anyone who's rightfully disgusted by this reality to grit your teeth and help people however you can, and direct your ire where it's most deserved.


  • The next link in the donation delivery chain is unrelated? Agree to disagree.

    Forgive me, but this is misguidedly reductive. No healthcare is provided in the US, by providers, without being subjected to capitalist exploitation. If I understand your thought process, a collective of the best pharmaceutical scientists in the world could create a completely non-profit pharmaceutical NGO, design and manufacture life-saving drugs, and give them away to hospitals (or sell them at-cost). But so long as hospitals then charge profit rates for those drugs, it would be ethically indefensible to financially support the NGO?

    Is that not holding patients hostage in an impotent effort to force change in the broader healthcare industry? I donate to my local non-profit blood center, who (assuming they’re similar to ARC) sells my blood to local hospitals at-cost, and then my blood is used to save a patient in need. The patient will then be responsible for paying the hospital exorbitant sums for my blood (from which the blood center doesn’t benefit) and all the other services it provides, but what’s the alternative?

    Edit: would it make a difference if the blood center didn’t charge hospitals for the blood, even though the hospital will still charge patients?



  • The cheats thing is really irritating. When I replay a game I prefer to skip as much tedium as I can, because even when it’s enjoyable the first time, on replay it starts to feel like… tedium.

    I’ll use new game plus for this when it’s offered (Last of Us 1 & 2, for instance), but lately I’ve been relying on cheats when needed. I just replayed Control this way and it’s such a smoother experience. I don’t need to slog through the slow strength-building; just let me hit all the story beats.




  • I…don’t think that’s what the referenced paper was saying. First of all, Toner didn’t co-author the paper from her position as an OpenAI board member, but as a CSET director. Secondly, the paper didn’t intend to prescribe behaviors to private sector tech companies, but rather investigate “[how policymakers can] credibly reveal and assess intentions in the field of artificial intelligence” by exploring “costly signals…as a policy lever.”

    The full quote:

    By delaying the release of Claude until another company put out a similarly capable product, Anthropic was showing its willingness to avoid exactly the kind of frantic corner-cutting that the release of ChatGPT appeared to spur. Anthropic achieved this goal by leveraging installment costs, or fixed costs that cannot be offset over time. In the framework of this study, Anthropic enhanced the credibility of its commitments to AI safety by holding its model back from early release and absorbing potential future revenue losses. The motivation in this case was not to recoup those losses by gaining a wider market share, but rather to promote industry norms and contribute to shared expectations around responsible AI development and deployment.

    Anthropic is being used here as an example of “private sector signaling,” which could theoretically manifest in countless ways. Nothing in the text seems to indicate that OpenAI should have behaved exactly this same way, but the example is held as a successful contrast to OpenAI’s allegedly failed use of the GPT-4 system card as a signal of OpenAI’s commitment to safety.

    To more fully understand how private sector actors can send costly signals, it is worth considering two examples of leading AI companies going beyond public statements to signal their commitment to develop AI responsibly: OpenAI’s publication of a “system card” alongside the launch of its GPT-4 model, and Anthropic’s decision to delay the release of its chatbot, Claude.

    Honestly, the paper seems really interesting to an AI layman like me and a critically important subject to explore: empowering policymakers to make informed determinations about regulating a technology that almost everyone except the subject-matter experts themselves will *not* fully understand.