sjmcmahon 20 hours ago

It's probably worth noting that there's been a lot of discussion about challenges reproducing this paper's workflow, and that as described it appears to suffer from data leakage, so much so that you can replace sections of their algorithm with random initialisation and get at least as good results. See, e.g.:

https://pubpeer.com/publications/C8CFF9DB8F11A586CBF9BD53402...

Having been on both sides of the reviewing process, I'd say it seems incredibly difficult to get good peer review of data-intensive studies in medicine, as few people have the time to really dig into the details of these models.
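To make the leakage concern concrete, here's a toy sketch of the generic failure mode (not CHIEF's actual pipeline, just an illustration of why "random initialisation does as well" is a red flag): if you select features using the whole dataset, test samples included, even pure noise can "classify" random labels far above chance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pure noise: 100 samples, 5000 features, random binary labels.
# Any honest evaluation should score ~50% accuracy.
X = rng.standard_normal((100, 5000))
y = rng.integers(0, 2, 100)

# Leaky step: pick the 20 features most correlated with the labels
# using ALL samples (train and test together).
corr = np.abs((X - X.mean(0)).T @ (y - y.mean()))
top = np.argsort(corr)[-20:]

# "Evaluate" with a 50/50 split made *after* the leaky selection.
X_sel = X[:, top]
train, test = np.arange(50), np.arange(50, 100)

# Simple nearest-centroid classifier fit on the training half only.
c0 = X_sel[train][y[train] == 0].mean(0)
c1 = X_sel[train][y[train] == 1].mean(0)
pred = (np.linalg.norm(X_sel[test] - c1, axis=1)
        < np.linalg.norm(X_sel[test] - c0, axis=1)).astype(int)
acc = (pred == y[test]).mean()
print(f"accuracy on pure noise: {acc:.2f}")  # well above the 0.5 chance level
```

Doing the feature selection inside the train split alone drops the accuracy back to ~50%, which is exactly the kind of sanity check the PubPeer thread is asking about.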

aydyn a day ago

Sounds promising, but it still needs to be independently validated. It's always wise to take AI medical research with a grain of salt.

  • TechDebtDevin a day ago

    It's Harvard. I would literally be more excited for the same announcement from the University of Wyoming.

    >Sounds promising

    More fabrications from one of the biggest grifting institutions on Earth, Harvard.

    So sick of their name even having merit. They literally license their name to sell fake textbooks at airports. Why are they even allowed on here?

    • neom 20 hours ago

      "They literally license their name to sell fake textbooks at airports."

      ...huh? Can't find anything about this on google.

    • Alifatisk a day ago

      Is Harvard that bad? I assumed they were prestige.

      • next_xibalba 15 hours ago

        There has been a lot of news about academic fraud at Harvard lately (although some cases date back decades). It’s pretty bad when the leader of the institution gets busted for it. Harvard’s reputation is in free fall. Anytime I see Harvard attached to some big announcement, I just assume the result has been p-hacked, exaggerated, or otherwise manipulated.

      • InkCanon 21 hours ago

        The root of prestige is the Latin word praestigium, which means an illusion or delusion. One of the most poetic pieces of etymology in today's society.

      • LoganDark a day ago

        That's what they want you to think. I mean this entirely non-sarcastically. I don't know exactly how bad they are or aren't, but they work hard to look like prestige.

theptip 17 hours ago

> CHIEF achieved nearly 94 percent accuracy in cancer detection and significantly outperformed current AI approaches across 15 datasets containing 11 cancer types

I would have thought that performance vs. human level is the most interesting benchmark? Maybe that is covered in the Nature article.

BaculumMeumEst 18 hours ago

Expecting the code to be plagiarized and the results to fail to replicate.

FerretFred 15 hours ago

UK reader here... As someone who's lived with a dear friend's cancer and its effects for the past few years, I'd say that early/earlier diagnosis by whatever method is to be welcomed. However, if you then can't treat the disease, surely the early diagnosis will be for nothing? Sadly, with incidences of cancers of all sorts apparently increasing exponentially, worldwide, not having the means to treat them is heartbreaking. Actually, as much money and effort needs to be invested in finding the causes of various cancers as in curing them (or trying to).

HexDecOctBin 16 hours ago

> Title: A New Artificial Intelligence Tool for Cancer

Curing it or causing it? Because that's the typical AI marketing appeal, no? "It will kill everyone, except our customers."

chefandy 21 hours ago

Good thing Silicon Valley is pumping billions of dollars, and burning through unimaginable natural resources in the midst of a climate crisis, to make systems that compete with commercial artists by selling cheap knock-offs of their artwork, and to relieve us of the burden of doing things like writing school papers, penning heartfelt personal correspondence, or making animated avatars for instant messaging. There's money to be made, so why use this amazing new technology to solve humanity's actual problems instead of just shoving a bunch of mediocre, who-gives-a-shit features into people's phones?

  • throwaway918299 19 hours ago

    I'm more skeptical than most on the current wave of AI tech innovation.

    However, believe it or not, humanity can collectively work on different things at the same time. And the people putting emoji generators in phones are probably not the people I would want doing cancer research. And many, many things that we rely on today were not created directly by research in those topics, but were born from innovation in other, unrelated areas.

    • chefandy 17 hours ago

      You don't feel that the astonishing amount of resources poured into current consumer level AI products is different?

      • borski 16 hours ago

        No. We poured similarly large amounts of resources into hundreds of companies in the dotcom boom, crypto, and so on.

        This is a phase, just like many others, and will pass.

        AI and LLMs will stick around and be important. The hype? That will die in favor of something else.

        • chefandy 12 hours ago

          "It's what we've always done" is a classic non-argument against doing something. Is there an amount we could spend on something that essentially winds up being useless that you think would be bad? Do you not think there's a trade-off at some level about the sort of things people invest in?

          • borski 12 hours ago

            That's not the argument I made. You were responding to an argument that "humanity can collectively work on different things at the same time," and "many things that we rely on today were not directly created by research in those topics and were born from innovation in other unrelated areas."

            Your response was "You don't feel that the astonishing amount of resources poured into current consumer level AI products is different?"

            To which I responded that no, I don't feel that the amount of resources poured into current consumer level AI products is different; it is the same as it has always been.

            That is not the same as making an argument that that is how it should be.

            • chefandy 10 hours ago

              Sure, that's not the point of what you said but it's the premise.

              > We poured similarly large amounts of resources into hundreds of companies in the dotcom boom, crypto, and so on.

              I don't agree with your ostensible implicit assumption that in this iteration of this cycle a) the economic and social costs are not consequential enough to care, (e.g. the social impact of how much easier scamming people is with these extremely capable base models,) b) the costs are comparable to those other things, (e.g. Goldman Sachs says we're looking at a 160% increase in data center energy usage driven by AI) and c) ignoring it will have the same negligible consequences that ignoring those other things did.

        • clcaev 15 hours ago

          In each round of expansion more externalities happen; we just fail to tax the externalities to reflect the real world consequences.

          • borski 15 hours ago

            I’m not sure I understand what you mean. Could you clarify?

  • alehlopeh 20 hours ago

    Any idea when it won’t be the midst of a climate crisis? Oh ok.

    • chefandy 17 hours ago

      Surely the nihilist approach will be an effective solution.

  • suby 20 hours ago

    I'm confused by your comment. This is an article about AI potentially helping with treatment of cancer.

    I strongly disagree in any case that we shouldn't be investing in AI. I think there's likely a rising-tide-lifts-all-boats effect that occurs with improving AI in general -- it all feeds into each other. People who work on image generators are building expertise in AI and can potentially discover things which advance the field, or inspire others to work in the field. Humanity getting better at making AI generate images, or compete at video games, or predict protein folding, it all probably contributes to the rate of improvement in AI.

    And it's not unreasonable to think that AI will one day solve problems like cancer or climate change, so we should very much care about the rate of AI improvement. As for the power concerns, ironically, thanks to the increased demand we are seeing companies like Microsoft and Oracle make large investments in nuclear power now. It could be the case that this kick-starts a boom in the nuclear industry which eventually brings down costs / eases regulations enough to put us in a better place in the long run.

    It's extremely complex, I don't think we can say at this point that investing in AI in the midst of a climate crisis is a mistake.

    • chefandy 17 hours ago

      You misread my comment. I think solving problems like finding genetic patterns in cancer is exactly what we should be investing in. That is not even close to the largest resource sink for AI model training right now. If the resources were going into generally making AI better, great. They're not. They're going into training models for mediocre, "gee-whiz, ain't that neat" consumer products.

      • evantbyrne 17 hours ago

        I think people may not be aware of how much medical research is happening right now based around ML. It is a key component of our liquid cancer biopsy. If anything, there might be a bit too much hype-driven development at the moment. I would be cautious of any ML-based diagnostic that isn't leveraging the technology to better understand the biological aspects of cancer itself.

        • chefandy 12 hours ago

          Great. I'd rather add a drop in that bucket than have another model trained to make my phone do something I wish it didn't do.

      • borski 16 hours ago

        That’s not true. That’s just what you see.

        There is tons of research going on using AI for a lot more than memes.

        • chefandy 12 hours ago

          I'm not implying that there isn't money going into AI in medical research, and a bunch of other worthy pursuits too. However, there's also an extraordinary amount of resources going into dumb shit that nobody wants, resources that could actually benefit humanity. Not a trivial sum for a test, not a large sum that will contribute to entertainment: it's the equivalent of an airport gift shop trinket. The person receiving the useless bauble would be better off without it, but the giver paid $20 for it so...