I'm surprised at the negative comments just because you're trying something new. It's an experiment that is possible because AI - a new tool for humanity to use - is now available. Please continue to exploring and following your curiosity - that's what got you here in the first place. I love what you're doing with The Weekly Anthropocene! Wishing you the best!
What tool? What problem does “AI” actually address that we need to solve? The things it does, writing and art, are not things we need to replace.
Any glance into the source of AI shows that it is something billionaires are selling to try to create a version of the internet they can personally profit off of and control.
Did you all even read the article? Sam isn't using AI to write his articles. He is using AI to create a space where his readers can ask it questions REGARDING the articles he has ALREADY written. The AI looks through his ALREADY WRITTEN articles (historic data) and answers any questions we may have based on the knowledge he has already provided in those articles. There is no NEW CONTENT here being generated by the AI.
Maybe an example will clear things up: I can ask AnthropoceneGPT this question - has Weekly Anthropocene ever covered, say, Mozambique? If Sam has written about Mozambique in any of his previous posts, then the AI should be able to fetch that (how wonderful that you can do that without having to scan through every one of his posts)! If he hasn't written about Mozambique, the AI will return nothing. It definitely won't generate something artificial, if that's what y'all are concerned about.
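For anyone curious about how that kind of lookup can work, here is a minimal sketch, assuming a simple keyword search over a fixed archive. The Post class and the score and retrieve functions are hypothetical names for illustration only, not the actual AnthropoceneGPT implementation; a real tool would hand the retrieved posts to a language model as its only context when wording the answer.

```python
# A purely illustrative sketch of "answers drawn only from already-written posts".
# Post, score, and retrieve are made-up names, not AnthropoceneGPT's real code.
from dataclasses import dataclass


@dataclass
class Post:
    title: str
    url: str
    text: str


def score(question: str, post: Post) -> int:
    """Crude relevance score: count question words that appear in the post."""
    words = {w.lower().strip("?.,!") for w in question.split()}
    body = post.text.lower()
    return sum(1 for w in words if w and w in body)


def retrieve(question: str, archive: list[Post], top_k: int = 3) -> list[Post]:
    """Return the most relevant already-published posts, or nothing at all."""
    ranked = sorted(archive, key=lambda p: score(question, p), reverse=True)
    return [p for p in ranked[:top_k] if score(question, p) > 0]


# If no post mentions the topic, retrieve() returns an empty list rather than
# inventing anything, which is the behaviour described in the comment above.
archive = [Post("Mozambique dispatch", "https://example.com/mozambique",
                "A dispatch on solar power and national parks in Mozambique.")]
print(retrieve("Has the newsletter ever covered Mozambique?", archive))
```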
clap clap clap
I'm extremely disappointed and cancelled my paid subscription. Rather like Earth Hope using terrible AI art, I find this move at odds with your message. It may use less energy than a microwave, but at least the microwave feeds someone. This drive to use LLMs instead of existing solutions serves no purpose except to further enrich tech billionaires.
We do need to keep an eye on resource usage, but printing is faster than handwriting, and the Internet is faster than printing. We need to weigh the trade-offs, not reject the tool out of hand.
I unsubscribed over the use and promotion of AI content.
Fascinating. I generally dislike the AI summaries from a Google search because I feel like I need to click through to each source to see where it’s getting its data from and whether that is a reasonable source. For something like this, however, I can see it making a lot of sense, because you’re drawing from a known selection of sources.
As a grad student, I've saved tons of hours with this function of AI. By uploading all my sources to an LLM, I've been able to quickly find things I remembered reading at some point. Especially for legal citations (which have to reference specific pages), this sort of tool is incredibly valuable. And the fact that you can discuss your line of thinking about the uploaded documents with the LLM, to check the strength of your understanding and analysis, is great too.
All the reflexive anti-AI comments on here are weird.
I did a dive on microplastics to remind myself how you have addressed the problem in past issues. I was able to skim through the nature of the problem, photomicrographs showing the diversity of sources of microplastics in water, and finished with China's experience with phytoremediation using water hyacinth. The ability to connect to the original piece that you wrote is a real plus! I'm glad I could go back to read it. I would certainly use this again. It is like having a personal librarian for the Weekly Anthropocene Library. Lately, AI summaries pop up automatically on my search engine and I am NOT a fan. My searches are for verifiable data from trusted, peer-reviewed sources. AI does not qualify and can't be trusted. I distribute AI-generated 'scientific' summaries for the students I mentor to dissect. The summaries also serve as examples of how NOT to write a scientific paper.
I’m not inclined to trust this. My main objection to generative language AIs, as a scientist, is the fact that they often provide baseless or false information.
I don’t believe that being trained solely on newsletter content will address the information hallucination problem.
Well, for my part (I'm not really well informed about IT), if it's a useful tool that's available, then I'm glad it helps you communicate an important message. I will take a look at the general issue of computer assistance in publishing now.
In 2023, data centers consumed 4.4% of U.S. electricity—a number that could triple by 2028. https://iee.psu.edu/news/blog/why-ai-uses-so-much-energy-and-what-we-can-do-about-it
Penn State is pretty credible, and that is just one recent article.
Thanks. That’s a helpful start. There are so many facets to this.
You can't use ChatGPT, no matter how tiny the individual footprint, without being part of and building on all the environmental damage that the infrastructure used to create it has wrought. This seems to me like someone who only bikes and walks for environmental reasons suddenly saying "well, everyone is using cars, so to know about cars and the potential damage they do, I guess I should go buy one and use it".
AI is good and bad, much as the Internet is. And I certainly think we need to keep talking about its uses, especially as it has great potential for mitigating climate change, as well as other positive uses for humans.
I had a quick go on it and was impressed at how fast it replied to my question about the impact of dogs on the environment. There's not much data in your newsletters on that, I imagine, but it gave me a short piece you've written about wolves.
As a cartoonist, I am angry at how my work has been used to train AI without consent or compensation. But I am also using it to handle mundane admin stuff and free up more time to be creative.
Whatever we think about AI, it's not going to go away.
Thank you.
Sam, I'm thrilled you've found such a great collaborator.
I have to tell you, Sam, I opened this when it landed for me last night (I am in Australia) and I was like, no wayyyy - just as I was closing down my laptop after building out my own private GPT. I didn't respond then; I wanted to see what more comments might come. It is very interesting. As you can see, I am all about Living Systems - nature and Life leading all the way. I haven't 'struggled' with AI; I have been mulling it over and reading up, and whilst, funnily enough, I don't use it myself - I don't have a need to ask it things - I knew the best way for many to truly understand my work would be to offer a space where they can ask anything around living systems and it responds. It is based on my work alone. It doesn't feed off any other sources, so it allows my peeps to truly reach in, ask questions in their own way, and then even reach out to me for more clarity. I love what this offers. I love that it is based on your work, so people can search your work. AND I know that for us to reach a middle ground of how we move into living a life more aligned with life, and morally able to work for all, we will need AI. The issue ISN'T AI; the issue is humans and our inability to use any of our creations in a moral and valuable way that doesn't lead to addiction or capitalism. So let's see what it brings. I can't wait to hear how it goes.
What world do you live in, Mr. Anthropocene?! In 2023, data centers consumed 4.4% of U.S. electricity, a number that could triple by 2028. https://iee.psu.edu/news/blog/why-ai-uses-so-much-energy-and-what-we-can-do-about-it
Penn State is pretty credible, and that is just one recent article.
I will ask, out of genuine curiosity, where you think this leads.
Where does it lead if we have people uncritically supporting a single source of information, derived not from any knowable people but from a machine whose sources and information are infinitely corruptible and infinitely controllable?
Centralized, unlike the Internet, into the hands of wealthy tech billionaires, and used exclusively to replace people at the things they love doing best.
The “AI” is worthless, Sam. I used to use it before realizing it just hallucinated crap all the time. And if I corrected it, it would just “yes man” me and move on.
I’ve got to know, why create a tool that’s supposed to replace you?
The single most disappointing thing you’ve ever done. AI has no significance. It is a thing propped up by delusional tech billionaires trying to create a centralized technology, an “internet 2.0” that they can control. It’s gonna flop, just like crypto and just like the Metaverse.
“The internet, but centralized into the hands of billionaires, with a massive resource sink” is probably the worst idea we’ve seen from them yet.
This leaves me incredibly, incredibly disappointed. I think this newsletter is absolutely essential. But you are buying into a technology that’s built on lies and has no ability to help us. Unless, of course, you’re a billionaire who wants to replace artists and writers.
Again, immensely disappointed and frustrated with this.
I’m not unsubscribing yet, but I won’t be using this AI product.