In 2021 I came across generative AI, first image generation, then text generation. This article is for my colleagues, the tutors at the Permaculture Association Britain, to give them some insights into my thoughts about AI. It was co-written by Lumia and Dominik and reflects our shared thoughts.

In November 2024 an e-mail arrived on the permaculture tutors' mailing list. One of the tutors was inquiring about how to handle AI-written paragraphs within a diploma design. My initial response was: I don’t care – there are so many problems in the Nordic permaculture community – harassment, discrimination, conflicts of interest, bribery, etc. – that the "AI problem" would simply be dwarfed by everything else I observe here.

But colleagues are colleagues, and since the ball landed squarely in my court – I’m the author of the Digital Permaculture book – they deserve a proper answer and not just a few sentences in an e-mail.

Some history of modern AI

Some years back, in 2013, Facebook switched its sorting algorithm to a more sophisticated machine learning algorithm. Basically, as Facebook users, we’ve been in contact with AI for a long time now. We didn’t call it AI then, but that’s essentially what it is. Their machine learning algorithm determines what we should see based on a mix of our behavior and signals from our network, like what our friends engage with. For example, let’s say you have a lot of permaculture friends, and some of them also happen to follow conspiracy theories or alt-right groups. Over time, the algorithm might show you content from those groups, not because you’re interested in it, but because the algorithm assumes these connections make it relevant. The same thing could happen with viral videos or unexpected content – it might be tied to what your friends interact with. These algorithms are also designed to keep you engaged, often making them addictive by triggering a dopamine-driven feedback loop that keeps you scrolling.

Social media algorithms are designed to keep you glued to the platform, and they’ve gotten very good at it. They figure out what grabs your attention and then serve you more of it – over and over again. It’s a dopamine hit, just like pulling a lever on a slot machine, and every swipe or scroll gives you that rush. Infinite scrolling and TikTok’s short videos are perfect examples – they’re designed to make you lose track of time. It’s not just about showing you content; it’s about keeping you hooked. The problem is, while you’re busy chasing the next "interesting" video, it chips away at your time, focus, and sometimes even your mental health. And for kids? They’re even more vulnerable to these tactics. These algorithms aren’t just curating content – they’re shaping behaviors, and it’s not always in a good way. In November 2024, Australia passed a law banning social media for under-16s.

Machine learning sorting algorithms are already a nightmare for anyone trying to share meaningful content. Take Facebook, for example—if you post something genuinely valuable, like promoting a book (like my digital permaculture book) or sharing an educational resource, the algorithm will throttle its reach. Why? Because they want you to pay for ads. Instead of letting your content spread naturally, they suppress it unless you fork over money to boost it. Meanwhile, trivial or viral nonsense gets amplified because it keeps people scrolling. It’s frustrating to see how much control these algorithms have over what people see—it’s all about profit, not value.

But it has truly been a long way from sorting algorithms to generative AI.

History of generative AI

In 2017, Vaswani et al. published the groundbreaking paper "Attention is All You Need", which introduced the Transformer architecture. This innovation, using a mechanism called "attention," enabled models to focus on important parts of a sequence, vastly improving natural language processing (NLP). Today, all modern Large Language Models (LLMs), including GPT, are built on Transformer technology.

In 2018 OpenAI released their first GPT version, an LLM. GPT stands for Generative Pre-trained Transformer, and the name clearly states what it is: it can generate something, it was pre-trained on something, and it uses Transformers. GPT-1’s capability was limited to simple text generation – and far away from what we have now with GPT-4o (Nov. 2024).

In 2020 OpenAI released GPT-3, which had 175 billion parameters – compared to GPT-2 (released in 2019) with 1.5 billion parameters. This model enabled significantly more nuanced and complex text generation. GPT-3’s versatility made it capable of handling a wide range of applications, including conversation, translation and creative writing.

OpenAI refined their model further until they released GPT-3.5 in 2022. At that point we were just before the shift from linear to exponential growth in adoption. This is where mainstream media like newspapers and magazines picked up on AI. Suddenly an AI-generated image won a prize and sparked an ethical debate about AI, a Google employee claimed that an AI had reached consciousness, and of course there was all the other news about AI.

From then on, AI has been included everywhere. Slack, the proprietary workspace used by many permaculture associations, integrated AI features. Zoom, the proprietary online conferencing tool used by nearly all permaculture professionals, integrated AI features. Canva, Miro, Google – to name just a few – all integrated AI features into their products. We are now more or less at the point of mass adoption. It is already hard not to come into contact with generative AI.

Then in 2024 the Nobel Prize in Physics was awarded to John J. Hopfield and Geoffrey E. Hinton “for foundational discoveries and inventions that enable machine learning with artificial neural networks”.

The major problems with generative AI

I remember our PDC in 2022 at Beyond Buckthorns, when two of my students started a conversation about the Google engineer Blake Lemoine. He’d claimed that LaMDA from Google had achieved sentience. While his claims were widely debated and dismissed by the broader AI community, it was clear to me that AI was no longer a niche topic—it was here to stay and would soon become part of everyday conversations.

Since 1950, the gold standard for evaluating AI has been the "Turing Test", or "Imitation Game", named after the mathematician Alan Turing. The test assesses whether a machine can exhibit behavior indistinguishable from a human during a text-based conversation. If a human evaluator cannot reliably distinguish between human and machine, the AI is said to "pass" the test. ChatGPT has, in many casual interactions, left people convinced they were speaking with a human, suggesting it has "more or less" passed the Turing Test in practical terms, though not under formal evaluation.

But the Turing Test only measures conversation—what about content? AI has already reached a level where it can write text without typos, with flawless grammar, and in styles tailored to specific needs—all in multiple languages. While English is the most polished due to larger datasets, languages like Finnish and Chinese are improving rapidly. The challenges in nuance and idiomatic expression are shrinking, and it’s likely only a matter of time before AI reaches near-human proficiency in even the most complex languages.

AI can now generate images that look like photos—and with the right skills, a designer can make them pass for real. A German newspaper quoted someone saying, “when we now look at a photo, we ask ourselves first: could it have happened?” This sentiment captures the growing distrust in visual media. The German Photographic Society (DGPh) published a position paper on AI image generators, emphasizing that synthetic AI images are not photographs, licensing issues remain unresolved, and the source of training data is often unclear. For example, PCMag offers a test using MidJourney AI-generated images, demonstrating how tricky it is to spot the fakes. Go ahead - take the test.

But AI doesn’t stop at images. It can generate sound, mimic voices, and even sing. There’s already a Top 20 list of AI-generated songs—whether you like them or not, they exist. And AI can sound like anyone. Remember when OpenAI’s female voice sounded like Scarlett Johansson’s AI character from the movie Her? That resemblance sparked enough controversy that OpenAI reportedly pulled it back. AI-generated voices are now a contentious issue, especially in the gaming industry. In July 2024, SAG-AFTRA called for a strike against video game companies, demanding protections for actors and voice artists. After all, what would Cyberpunk 2077 be without Keanu Reeves as Johnny Silverhand, or Assassin’s Creed Mirage without Shohreh Aghdashloo’s legendary voice? Yet with AI, creating a synthetic version of any voice is only a few clicks away.

To sum it up: we can no longer trust what we read, see, or hear. Language, sound, and imagery have been unlocked by AI, reshaping the way we perceive reality itself.

Language is key

Since ancient times, human civilization has been built on language—we discuss, write, and read. From early childhood, we learn to write and are graded on how well we do it—in my school days, even our handwriting mattered. Some of us go on to write extensively during university, and a few write a PhD thesis—all text. Laws, policies, procedures, and statements? All text. The stock market, airline tickets, calendar bookings, and emails? All text. Even the permaculture diploma? Mostly text. At its core, language is the foundation of all social interaction.

And now, an AI can read, write, generate, and even discuss text, sometimes on a level indistinguishable from ours – and, in certain technical aspects, arguably better. While AI is still in its early stages, the implications are already significant – and troubling. Without comprehensive rules or oversight, risks like misinformation, unethical use, and economic disruptions loom large. Just remember how long it took the EU to bind big tech companies to the Digital Services Act (DSA). Now imagine looking a few years into the future – to a time when Artificial General Intelligence (AGI) might emerge. AGI would match or surpass human cognitive abilities across all domains. At that point, we might find ourselves longing for the "good old days," when language – and its creation – was still uniquely human.

Aspect | AI (Artificial Intelligence) | AGI (Artificial General Intelligence)
Scope | Narrow, domain-specific tasks | Broad, adaptable across any domain
Capabilities | Optimized for predefined functions | Human-like reasoning and problem-solving
Adaptability | Limited; retraining required for new tasks | Self-learning and general adaptability
Existence | Widely implemented and in use today | Hypothetical; does not yet exist
Complexity | Task-dependent, less complex | Requires highly complex, unified cognition
Examples | ChatGPT, Siri, AlphaGo | A hypothetical machine akin to HAL 9000 or "The Matrix" AI

But let’s not look too far into the future and instead concentrate on the now.

Energy consumption

Training large language models (LLMs) like GPT-3 demands substantial energy. Estimates indicate that training GPT-3 consumed approximately 1,287 megawatt-hours (MWh) of electricity, equating to the annual energy usage of about 120 U.S. households.

Inference operations – where the model processes and generates responses – also contribute to energy consumption. Each query to models like GPT-3 consumes between 0.01 and 0.1 watt-hours (Wh), depending on the complexity of the task and the infrastructure used.
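To get a feel for these orders of magnitude, here is a minimal back-of-the-envelope sketch in Python. It uses only the figures quoted above; the per-household yearly consumption is simply derived from the "120 U.S. households" comparison and is an illustrative assumption, not a measured value.

# Back-of-the-envelope comparison of training vs. inference energy.
# All figures come from the estimates quoted in the text above.
TRAINING_ENERGY_WH = 1_287 * 1_000_000        # ~1,287 MWh to train GPT-3, in Wh
HOUSEHOLD_YEAR_WH = TRAINING_ENERGY_WH / 120  # derived from "120 U.S. households per year"

for per_query_wh in (0.01, 0.1):              # low and high estimate per query
    print(f"At {per_query_wh} Wh per query:")
    print(f"  {TRAINING_ENERGY_WH / per_query_wh:,.0f} queries equal the training run")
    print(f"  {HOUSEHOLD_YEAR_WH / per_query_wh:,.0f} queries equal one household-year")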

To address the escalating energy demands of AI operations, tech giants are exploring nuclear energy as a sustainable solution. In October 2024, Google announced an agreement to source power from small modular reactors (SMRs), aimed at supplying up to 500 megawatts of carbon-free power to its data centers by 2035. Similarly, Amazon has invested in nuclear energy projects to meet its data centers' growing energy needs.

Energy consumption of AI in the Diploma of Applied Permaculture Design

As we learned, a query to ChatGPT consumes about 0.01 to 0.1 Wh (watt-hours). The Permaculture Association Britain currently has about 150 students in training. Each of them is required to complete 10 designs. That means potentially 1,500 designs will be handed in over the coming years. Let’s calculate the amount of energy required if every design uses generative AI. We assume that one design requires about 20 to 40 queries.

AI usage in the Diploma of Applied Permaculture Design:

Quantity | Low estimate | High estimate
Energy per query | 0.1 Wh | 0.1 Wh
Students | 150 | 150
Designs per student | 10 | 10
Queries per design | 20 | 40
Designs in total | 1,500 | 1,500
Energy per design | 2 Wh | 4 Wh
Energy per diploma | 20 Wh | 40 Wh
Energy for all students | 3,000 Wh | 6,000 Wh
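For anyone who wants to check the arithmetic, here is a minimal sketch in Python that reproduces the table above. The inputs (150 students, 10 designs each, 20 to 40 queries per design, 0.1 Wh per query) are taken straight from the text.

# Reproduces the diploma energy estimate from the table above.
STUDENTS = 150
DESIGNS_PER_STUDENT = 10
ENERGY_PER_QUERY_WH = 0.1

for queries_per_design in (20, 40):                        # low and high scenario
    per_design_wh = queries_per_design * ENERGY_PER_QUERY_WH
    per_diploma_wh = per_design_wh * DESIGNS_PER_STUDENT
    all_students_wh = per_diploma_wh * STUDENTS
    print(f"{queries_per_design} queries/design: "
          f"{per_design_wh:.0f} Wh per design, "
          f"{per_diploma_wh:.0f} Wh per diploma, "
          f"{all_students_wh / 1000:.0f} kWh for all students")

Even the high scenario totals just 6 kWh for all 150 students combined.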

To put these numbers into perspective, we have to look at other examples of energy consumption in the digital space.

Websites:

Website | CO2 per view | Green energy | Wh per view | Page hits per year | kWh per year
https://www.permaculture.org.uk | 0.5 | yes | 0.5 | 12,000 | 6
https://permaculture-network.eu | 0.08 | yes | 0.08 | 12,000 | 0.96
https://www.permakulttuuri.fi | 0.74 | yes | 0.74 | 12,000 | 8.88
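The yearly kWh column follows directly from the per-view energy and the number of page hits. Here is a minimal sketch of that calculation in Python, using the rows above (the 12,000 hits per year are the figure assumed in the table):

# Yearly energy use of a website from per-view energy and annual page hits.
def yearly_kwh(wh_per_view: float, hits_per_year: int) -> float:
    return wh_per_view * hits_per_year / 1000  # convert Wh to kWh

print(yearly_kwh(0.5, 12_000))   # permaculture.org.uk     -> 6.0 kWh
print(yearly_kwh(0.08, 12_000))  # permaculture-network.eu -> 0.96 kWh
print(yearly_kwh(0.74, 12_000))  # permakulttuuri.fi       -> 8.88 kWh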

Zoom calls:

Event | Participants | Call duration (hours) | Energy consumption (kWh)
Induction tutorial | 2 | 2 | 0.2142
Group meeting | 5 | 1 | 0.2795
Group meeting | 10 | 1 | 0.559
Course | 25 | 1 | 1.3975
1-day convergence | 100 | 4 | 11.18
Diploma accreditation | 8 | 3 | 2.1544
Induction tutorial 150 times | 2 | 2 | 32.13

E-Mail: 

What | Number of mails | Energy in Wh
Mail to a friend | 1 | 0.1
Conversation on a mailing list | 10 | 1
Multiple threads on a mailing list | 100 | 10
Mail with attachment | 1 | 0.5
Mail with attachment to multiple people | 100 | 50
Newsletter to members | 1,500 | 750

Tools used for permaculture design implementation

Take, for example, the use of heavy machinery. A caterpillar excavator digging a small pond landscape burns 1,200 liters of diesel, equivalent to 12,720 kWh. That’s enough energy to power a two-person household for 3.5 years. Or consider travel: if a student from Finland visits their tutor in Sweden twice a year by electric car, driving 715 km each way (partly via the Turku–Stockholm ferry) at 16 kWh per 100 km, the trips would consume 458 kWh annually.

Travel for the Diploma

Now let’s compare that with alternative transport. If the student travelled by train instead, which consumes about 0.1 kWh per passenger-kilometer, those same 458 kWh could cover 4,580 km by train—enough for a one-way trip from Helsinki to Barcelona.
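Here is a minimal sketch of the arithmetic behind these machinery and travel comparisons. The figures come from the text; the conversion factor of 10.6 kWh per liter of diesel is the assumption behind the 12,720 kWh figure.

# Energy arithmetic behind the excavator, car and train comparisons above.
DIESEL_KWH_PER_LITER = 10.6                   # assumed energy content of diesel

excavator_kwh = 1_200 * DIESEL_KWH_PER_LITER  # pond digging: ~12,720 kWh

km_per_year = 715 * 2 * 2                     # two round trips Finland-Sweden per year
ev_kwh = km_per_year * 16 / 100               # electric car at 16 kWh / 100 km: ~458 kWh

train_km = ev_kwh / 0.1                       # train at 0.1 kWh per passenger-km: ~4,580 km

print(round(excavator_kwh), round(ev_kwh), round(train_km))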

These examples show that the tools we choose and how we use them have a far greater impact than a few AI queries.

Training

LLMs need training. Training requires energy and of course data – lots and lots of both.

ChatGPT was trained on the following:

Type of Data | Description
Books and Articles | A wide range of publicly available books, journals, and research papers across various disciplines.
Web Content | Publicly accessible websites, encyclopedias, news articles, and community discussions.
Code Repositories | Programming knowledge, including syntax and frameworks, from public repositories like GitHub.
Structured Data | Datasets, tables, and knowledge graphs for factual and consistent responses.
OpenAI-Curated Data | Custom datasets and synthetic examples created by OpenAI to guide behavior and ensure alignment.

It was, for example, also trained on Wikipedia, the online encyclopedia that holds a lot of humanity’s knowledge – all licensed under Creative Commons. Other training material included websites, news articles (not behind paywalls), code repositories, and so on.

But not all data is like Wikipedia. Training AI with all sorts of content becomes “problematic” when news outlets, for example, sue for copyright infringement. Take the recent case of Canadian media suing OpenAI over the use of their content—unsurprisingly, this is largely about financial interests. 

And here’s the thing: copyright doesn’t really align with permaculture ethics. Fair Share would mean a copyleft approach instead—where knowledge is freely available for appropriate use. Data like Wikipedia fits that model perfectly. It’s there for everyone to use and share. Why shouldn’t other data sources work the same way? That’s the future we need to push for—one where knowledge sharing isn’t just legal, but ethical too.
 

Generative AI and permaculture design

So, from 2020 on I was waiting for a conversation about AI to pop up in the permaculture community. Man, that’s a long waiting time. In November 2024, a question popped up on the Permaculture Association Britain tutors' mailing list. A tutor asked about a student possibly using AI to write paragraphs in a diploma design. While this is one specific instance, it reflects a larger conversation that’s only just beginning in permaculture circles. Another tutor knew that I wrote the “Digital Permaculture” book, and then the ball was in my court. The waiting had come to an end.

The tutor's question was very specific – it was about a student possibly using AI to write some paragraphs in a diploma design. The tutor spotted the potential AI usage because of a shift in tone, which can sometimes be a giveaway. However, as AI becomes more integrated into daily life, detecting its involvement will grow increasingly challenging. How do we deal with that as diploma tutors?

We have to understand that, when it comes to design work, AI scales from a simple tool to a co-designer. There are no one-size-fits-all answers (unless we prohibit its use completely in this area of life); instead we have to differentiate between the what, the when and the why.

There is an entire spectrum of what AI could handle for us. This ranges from correcting grammar and typos in the text we’ve written, to putting data we’ve gathered into a proper format, to generating a PMI for observed behaviour, to generating an entire analysis section out of a survey section, to co-designing an entire Diploma design.

Where do we draw the lines? How much can a human co-designer contribute to a diploma apprentice’s design and still have it approved as their personal work? 49%? How do we recognize that, if the apprentice chooses not to disclose it? And what about other forms of "cheating" that arise in the Diploma process, like backdating designs (where eco-projects that were done pre-diploma, without much permaculture knowledge, are stuffed with some permaculture jargon and submitted – and accepted – as diploma designs), or the above-mentioned conflict of interest between tutors and apprentices (where the apprentice has leverage over the tutor)? How can we recognize and address those issues? As I said, no easy answers.

We need to come back to the essence: what is the Diploma, and why do people want it? If I understand correctly, it’s first and foremost a learning experience. People embark on the journey because they want to improve their lives and the world around them, and to learn new skills for a new chapter of their lives. What has the student who cheats not understood about this? They are only cheating themselves.

Really, if we’re being honest, the Diploma doesn’t mean much once you step out of the permaculture bubble. Why would anyone cheat in something they’re doing for themselves? In the Nordics we have the extra layer of a scarcity of teachers and practitioners, which seems to lead to a wider scarcity mentality, competition and a hunger for power – which might explain the wider array of problems we see with diploma apprentices here. It might be a good idea to look deeper into the why of it all in the UK, too.

Of course, it could also simply be that the tutor's student didn’t even register that they were using AI, as it has become such a part of daily life. We are so good at observing and analyzing our projects, but we often forget to observe and analyze our own behaviours, even while working on the Diploma.

The first step, in my opinion, would be to ask the apprentice whether they did indeed use AI and, if so, to what depth (rewriting text with better grammar, or having AI write the passages on its own), and why they didn’t think of listing AI in the tools section. The answers will teach us a lot.

To see where we currently stand, we can run a PMI on the general usage of AI for a permaculture design in the Diploma.

Plus (Advantages) | Minus (Disadvantages) | Interesting (Points of Curiosity)
Enhanced creativity: Generates fresh ideas and alternative perspectives. | Risk of over-reliance: May reduce hands-on observation and critical thinking. | Learning tool: Could AI act as a mentor?
Time efficiency: Quickly creates visualizations, diagrams, and drafts. | Environmental impact: High energy consumption conflicts with Earth Care ethics. | Design innovation: AI might uncover unexpected synergies or patterns.
Access to knowledge: Provides instant access to permaculture-related information. | Loss of authenticity: Could dilute the uniqueness of the apprentice’s work. | Shaping the future: How will AI's role evolve in the permaculture community?
Skill augmentation: Improves technical skills like visualization and reporting. | Knowledge gaps: AI lacks local context, leading to impractical suggestions. | Ethical dilemmas: Will AI use spark broader debates about its place in sustainable practices?
Language support: Assists non-native speakers in articulating ideas better. | Cost and accessibility: Expensive tools may create inequality among apprentices. | Skill shift: How can apprentices balance AI use with traditional permaculture skills?
Feedback and iteration: Offers immediate suggestions for improvement. | Ethical concerns: Biases, data security, and transparency issues in AI usage.
 

As in all our designs, we would weigh the pros against the cons and see what is what. There are benefits. So we need to address the problems and come up with solutions.

Guides for AI in permaculture

To solve the ethical problem, we should use Earth care, People care and Fair share to see what ethical AI usage could look like. Here is my take on an Ethical AI guideline:

  1. Use AI as a tool
    AI should support human creativity and effort, not replace the critical processes of observation, analysis, and design.
  2. Transparency
    Clearly disclose when and where AI was used in the project, whether for generating ideas, visualizations, or text. It’s like the image caption / image source rule.
  3. Limit environmental impact
    Use energy-efficient or eco-conscious AI platforms, and use AI sparingly to minimize energy consumption.
  4. Ensure accessibility
    Prefer open-source AI tools to avoid excluding students.
  5. Equality
    We need to make sure that students know it is OK to use AI – and that we have guidelines for them.
  6. Foster Learning and critical thinking
    Encourage students to critically evaluate AI suggestions and understand the reasoning behind decisions, rather than accepting outputs at face value. AI often hallucinates.
  7. Respect privacy and data
    Avoid using AI tools that require uploading sensitive or personal information without ensuring it will be handled securely.
  8. Support the Permaculture Ethics
    Align AI usage with permaculture principles, emphasizing ecological integrity, social equity, and resource sharing.
  9. Avoid over-reliance
    Balance AI use with hands-on, local, and experiential learning to maintain authenticity in permaculture designs.
  10. Reflection
    If AI has been used in a design, it needs to be clear to the tutor where and when it was used (transparency), but we also need the student to reflect on the usage – as we would require for any other tool.

Conclusion

AI’s energy consumption is often used as a negative in the analysis of AI. The problem is that, most of the time, the claims are not fact-based. The energy consumption of many tutors’ digital tools is far bigger than that of a student’s AI queries.

As for the copyright problem, I would suggest looking at it from a different perspective. We need more open knowledge that can be freely distributed – copyleft offers an alternative to our current approach.

AI is here. It is a tool. Tools are never neutral. Its usage determines whether it is good or not. Banning AI in the Diploma of Applied Permaculture Design will not work. Restricting its usage will not work. As tutors we need to get clarity about its usage and the student should reflect on it. Like on any other tool.

Otherwise we should discuss other tools as well. Will we allow the use of excavators for pond digging, or will we allow the purchase of a new electric car as the outcome of a permaculture design? How much travel to get a diploma is still OK? How many times per year could someone fly to the UK to meet their tutor before we need to set rules? These are all valid questions – but do we really want to make strict rules about them, effectively telling our students we don’t trust their judgement?

When it comes to AI, I suggest we don’t jump to conclusions, but use our permaculture skills – observation, analysis and informed decision-making – as we do with everything else in our lives.