Tag Archives: AI

Create your own Coloring Pages with AI

Coloring pages may bring to mind childhood afternoons filled with crayons, construction paper, and cartoon animals. But far from being just a nostalgic pastime, coloring has evolved into a multi-faceted educational and wellness tool. From preschool classrooms to graduate-level STEM programs, coloring pages are proving their worth in surprising and impactful ways.


The Foundations: Coloring in Early Education

Coloring has long held a valuable place in early childhood education. For Pre-K through elementary students, the benefits are well-documented:

  • Motor Skills Development: Gripping crayons and staying within lines strengthens hand muscles and improves fine motor control, which is foundational for writing.

  • Hand-Eye Coordination: Coloring encourages visual and spatial reasoning as children learn to coordinate sight and hand movement.

  • Color Recognition & Creativity: It’s an early introduction to color theory, symmetry, and emotional expression.

These early coloring activities aren’t just fun—they lay groundwork for cognitive and physical development that supports academic readiness.

Generated image showing a kids' coloring page with mythical creatures


Coloring for Wellness: The Adult Coloring Book Trend

Coloring isn’t just for kids anymore. Over the past decade, adult coloring books have surged in popularity, often found alongside mindfulness journals and stress relief guides.

  • Mental Health Support: Coloring is meditative. It offers a structured yet creative outlet that helps reduce anxiety and promotes focus.

  • Digital Detox: In an age of constant screen time, analog activities like coloring offer a necessary and calming break.

  • Creativity for All: Adult coloring books span themes like architecture, nature, literature, and fantasy—inviting users to re-engage with play and imagination.

Generated image that says "Mental Health Matters", surrounded by fancy floral patterns to color in.

 


Coloring in Higher Education: A Study Tool with Serious Potential

What may surprise many is how coloring pages are being used as study tools in secondary and post-secondary education. When applied strategically, coloring supports memorization, spatial reasoning, and content engagement across disciplines:

Science and Medicine

  • Anatomy Diagrams: Coloring bones, muscles, and internal organs reinforces spatial understanding of the human body.

  • Biology Processes: Cellular structures, DNA replication cycles, and taxonomies become more tangible when color-coded.

  • Chemistry: Molecular diagrams, periodic tables, and lab safety signs are easier to recall when visually and kinesthetically processed.

Neuroscience and Psychology

  • Brain Mapping: Coloring different lobes and functional areas supports recall and understanding of cognitive pathways.

  • Behavioral Models: Reinforce theories like Maslow’s hierarchy or Pavlovian conditioning using symbolic visuals.

 

behaviour coloring example

Geography and History

  • Map Coloring: Students studying political boundaries, biomes, or historical trade routes engage more deeply by physically interacting with the content.

  • Historical Timelines: Color-coded eras or civilizations help clarify chronology and cultural evolution.

Engineering and Architecture

  • Mechanical Diagrams: For fields like mechanical, civil, or electrical engineering, coloring gears, circuits, or CAD printouts helps students see system relationships.

  • Blueprints and Drafts: Color can be used to identify materials, stress points, or workflow systems.

Fine Arts and Humanities

  • Masterpiece Studies: Recoloring works by Van Gogh or Kandinsky helps students internalize techniques, palettes, and emotional tones.

  • Literature & Language: Use thematic coloring to analyze character development, plot structures, or vocabulary clusters.

coloring example of protein folding



Why Coloring Works as a Cognitive Tool

  • Multisensory Learning: Coloring combines visual, tactile, and kinesthetic inputs—an ideal blend for many learners.

  • Memory Encoding: The act of choosing and applying color helps encode information more deeply than passive reading.

  • Pattern Recognition: Regularly coloring systems or diagrams builds a strong visual memory for complex patterns.


By using AI to generate custom coloring pages, we can now move far beyond cartoon animals and into a world of personalized learning, interactive study tools, and mindful relaxation—all tailored to age, subject, and interest.


Here is a step-by-step example of using ChatGPT to make your own coloring pages:

ChatGPT opening prompt, "What can I help with?", showing the icon to upload a file.

Click the + sign in the prompt text entry window.

expanded mouse hover over + view

Click Add photos and files

Image of the attached photo with the prompt "please make this into a coloring page for me".

Screenshot showing the photo with the returned coloring page.

Click share (the arrow going up)

screenshot showing share options Copy Link and Download

(You can also drag photos/illustrations into the prompt window and drag them back out to your desktop when they are finished being created, if that works better for you.)
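If you'd rather script this workflow than click through the chat UI, the same idea can be sketched with the OpenAI Python SDK. This is a minimal sketch, not part of the walkthrough above: the model name, prompt wording, and image size are my assumptions; adapt them to whatever your account offers.

from openai import OpenAI

# Minimal sketch: generate a coloring page programmatically.
# Assumes OPENAI_API_KEY is set in the environment and that your
# account has access to an image model ("dall-e-3" here).
client = OpenAI()

response = client.images.generate(
    model="dall-e-3",
    prompt=(
        "A black-and-white line-art coloring page of mythical creatures, "
        "thick clean outlines, no shading, white background"
    ),
    size="1024x1024",
    n=1,
)

print(response.data[0].url)  # URL of the generated page; download and print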

Combining 3D Printing and AI

As my faithful followers will know, I've been reconditioning some used 3D printers.  This week I've been working on an old Ultimaker 2+.  To get it to print something, I had to load a file onto an SD card, as this printer has no native ability to connect to a computer or network.  I did not have an SD card reader for my Mac at first, so I printed some files that were already on the card – after some poking and prodding that did eventually work, but the files were uninteresting.  Around this time, my coworker Victoria Pilato lent me her SD card reader, so I happily went looking for something to print on Thingiverse – found a cute little fancy statue and set to printing.

Then I stopped the print job…

Even though the files on the card had been boring, they had at least printed OK.  This time the printer was trying to print from a few mm above the print bed.  I readjusted the bed calibration (which seemed fine, as I had already done it earlier) and still had the problem.

Next morning, I'm feeling ready to tackle this.  I ask ChatGPT.  It suggests my bed could be off, or POSSIBLY some weird thing in the G-code could be telling it to start the first layer higher on the Z axis.  I poke around in Ultimaker Cura, trying to find the issue, but I don't see the problem.

  • AI Fixes the G-code

It occurs to me that maybe ChatGPT can just read the G-code.  I dump it on the AI, and it turns out that it can.  It tells me mostly everything looks alright… BUT it says the file is configured for the Ultimaker 2+ Connect, which has auto-leveling, while my model does not, and that I should fix that in Cura.  Now, in Cura I can clearly see that I have it configured for the correct printer, not the Connect.  So I ask GPT to fix the code directly, instead of me doing it in Cura.  And it does.
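ChatGPT's diagnosis came from reading the whole file, but if you want to sanity-check this particular symptom yourself, a few lines of Python will do it. This is my own minimal sketch, not the fix from the chat; the filename is a placeholder.

import re

# Minimal sketch: report the first Z height a G-code file commands.
# A sane first layer starts around 0.2-0.3 mm; a value of several
# millimeters matches the "printing in mid-air" symptom.
with open("model.gcode") as gcode:       # placeholder filename
    for line in gcode:
        line = line.split(";")[0]        # strip G-code comments
        match = re.search(r"\bG[01]\b.*\bZ(-?\d+\.?\d*)", line)
        if match:
            print(f"First commanded Z height: {match.group(1)} mm")
            break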

 

  • AI Suggests a New 3D Model

I have already skipped the fancy little model and been working on a simple Benchy today.   After it provides me with the new code… and that code is happily chugging away on the printer, I ask GPT if it can alter the code to actually change the shape of the model.  It says it cannot, tells me what programs to use… and yet still offers to make a prototype.  I say sure.  It says it will give me code, but instead gives me an image.  Again it tells me it can generate an .stl file, but instead makes a placeholder, with another image.

Oh, by the way, I've told it to merge the Benchy with a UFO – because, why not?

UFO-infused Benchy as created by GPT. It has a saucer element both around the base of the boat and above it, almost like an umbrella.

This is an AI-rendered image, not a picture of a printed model.

  • AI Converts a 2D Image into a 3D Render

So at this point, I'm pretty happy with the cute Benchy from Outer Space, so I decide to download that image and bring it into Meshy.  Meshy is a browser-based AI 3D model generator that can make a model either from a text prompt or from an image.  I've previously made a little seawolf with it, starting from an image.

Original wolfie picture

3D creation by Meshy from the image

Printed Meshy model with no cleanup attempted

The one thing I wasn't thrilled about the first time I printed Wolfie straight from Meshy was that I thought it lost some of the detail and looked a bit blobby.

From inside Meshy, I loaded the picture I wanted it to generate a 3D model from.  As usual when you ask an AI to make a model, it gives me four choices.  You can see that it is doing the usual AI thing – not really trying to just make what you gave it, but providing you with some creative adaptations.

AI created versions of UFO/Benchy

Some of the choices are already less blobby than others, but I really wanted the one that looked the most like what GPT had made, so I went with the one in the top left – which already looks lumpy.

UFO-infused Benchy as created by GPT. It has a saucer element both around the base of the boat and above it, almost like an umbrella.

  • AI Makes Recommendations to Refine Meshy Model

Me: “I have brought in a model into Blendr from meshy. The model looks kind of globby. Is there an easy way to clean it up so it looks more defined?”

GPT:

Top Reasons for “Globby” Models from Meshy and How to Fix Them

🟠 1. Surface is Too Smooth (Normals or Shading)

  • Meshy sometimes auto-applies smooth shading, which can make details look melted or blobby.

✅ Fix in Blender:

  1. Select the model in Object Mode.

  2. Go to the left toolbar or press Right-click → Shade Flat.

    • Instantly sharpens up the look by reverting from smooth shading.


🟠 2. Mesh is Over-Triangulated or Dense

  • Meshy-generated models often have millions of triangles, making them look blobby or noisy.

✅ Fix:

  • In Edit Mode, run:

    • F3 → search Decimate Geometry (non-destructive, try ratios like 0.2–0.6)

    • Or use the Remesh Modifier (Voxel mode, low voxel size like 0.05–0.1)


🟠 3. Normals Are Messed Up

  • If the surface has weird lighting or jagged shading, the normals may be flipped.

✅ Fix:

  1. Go into Edit Mode (Tab)

  2. Press A to select all

  3. Press Shift + N to Recalculate Normals


🟠 4. Model Scale Is Off

  • Sometimes imported models are tiny or huge and look off.

✅ Fix:

  • Press N for the sidebar → check scale

  • Reset with Ctrl + A → Apply Scale


🧠 Tip: For 3D Printing

  • Use Mesh → Clean Up in Edit Mode to:

    • Delete loose geometry

    • Fill holes

    • Merge by distance

    • Remove doubles


If you want, you can send me a screenshot or description of what it looks like, and I’ll give you a specific cleanup sequence.

Or — I can write you a custom macro that does it all in one go!

Me: Macro???  Yes please!

Here is the gold – lol.  (I didn't know you could throw a Python script into Blender.  Even if you did, maybe you hadn't thought to have AI make it for you.)

—————————————————————–

import bpy
import bmesh

# Get active object and ensure it's a mesh
obj = bpy.context.active_object
if obj and obj.type == 'MESH':

    # Ensure we're in Object Mode
    bpy.ops.object.mode_set(mode='OBJECT')

    # Shade Flat
    bpy.ops.object.shade_flat()

    # Add Subdivision Surface Modifier (non-destructive)
    subdiv = obj.modifiers.new(name="Subdivision", type='SUBSURF')
    subdiv.levels = 2
    subdiv.render_levels = 2

    # Enable Auto Smooth to preserve sharp edges
    # (note: use_auto_smooth was removed in Blender 4.1+; on newer
    # versions use Object > Shade Smooth by Angle instead)
    obj.data.use_auto_smooth = True
    obj.data.auto_smooth_angle = 1.0472  # 60 degrees in radians

    # Go into Edit Mode for cleanup
    bpy.ops.object.mode_set(mode='EDIT')
    bm = bmesh.from_edit_mesh(obj.data)

    # Merge by distance to remove doubles
    bmesh.ops.remove_doubles(bm, verts=bm.verts, dist=0.0001)

    # Recalculate normals
    bmesh.ops.recalc_face_normals(bm, faces=bm.faces)

    bmesh.update_edit_mesh(obj.data)
    bpy.ops.object.mode_set(mode='OBJECT')

    print("✨ Cleaned, smoothed, and preserved edges!")

else:
    print("⚠️ Error: Please select a mesh object in Object Mode.")

—————————————————————–
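(If you've never run a script in Blender: open the Scripting workspace at the top of the window, click New to create a text block, paste the code in, select your imported mesh, and press Run Script. That bit of how-to is me, not GPT.)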

Here is what it looks like after applying the script:

It isn't perfect… but it is a whole lot better.  BTW – Blender is free; Meshy has a paid version that might very well make my models look better, but I am using the limited free version; and the original Benchy model is free from Thingiverse, as are many other items (though tips are encouraged – remember beerware?). I pay for ChatGPT – but I suspect this could have been achieved with the free version as well.

After this, I brought it over to Formware to do some last-minute manifold repairs on the STL file.

“You can use this free online tool without hassle to fix your stl files quickly. Files are deleted after download or otherwise after 6 hours to keep our systems clean. The fixer accepts binary and ascii .stl files.”
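Formware worked fine for me, but if you'd rather do the same kind of repair locally, here is a minimal Python sketch using the trimesh library – my own hedged alternative, not what Formware runs, and the filenames are placeholders.

import trimesh

# Minimal sketch: check and patch common manifold problems in an STL.
# Assumes `pip install trimesh`; filenames are placeholders.
mesh = trimesh.load("ufo_benchy.stl")

print("Watertight before repair:", mesh.is_watertight)

trimesh.repair.fix_normals(mesh)   # make face windings consistent
trimesh.repair.fill_holes(mesh)    # patch simple open holes

print("Watertight after repair:", mesh.is_watertight)
mesh.export("ufo_benchy_fixed.stl")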

And here is the print after the supports are removed:

UFO/Benchy printed out


Video from the Copilot page

I went to https://copilot.microsoft.com/

And then I saw this option on the side.  It says Visual Creator

list of clickable options on the right side of the page [copilot, Agents, Visual Creator, Get Agents, Create an agent]

New sample prompts come up, including "Create a video with stock media."

My Prompt:

show a video about safety when using a drill

This was produced in short order:

While still on that page, I clicked to open the video, and it opened in an editor called Clipchamp.

This is a screenshot of the Clipchamp GUI.

So – that’s cool.


Gemini Feature boost – Canvas

What is that Canvas thingy? (screenshot of the prompt entry box with a new button.) If you have been using ChatGPT, you may have already seen this feature there, but Gemini has added a "canvas" to its potential workflow.  The canvas is a separate "frame" or space on the page that gets updated as you work on whatever is in it with Gemini, without it scrolling up and away as you progress.  I've mostly used it for things like code, but you can totally use it for working on a draft of a regular written piece, like an essay, speech, screenplay, etc.

To activate it, you will want to open https://gemini.google.com/app.  

Once there, tell it what you want to work on – I'll think of something on the simple side.  Here is my prompt:

I would like to write a small app that lets me know that everything will be alright. I want it to suggest a nice quote and also ask me if I want to hear some soothing music. Can you help me write that?

Then, I want to use the new canvas feature.  This requires clicking the button in the chat/prompt window that says "canvas".  Now my screen looks like this:

A screenshot of Gemini running a "canvas". It shows the chat area on the left and a new coding window or frame on the right.

It even has a preview mode!

screenshot of the preview mode on the code it wrote.

I’m going to ask it to change how the music is selected. At this stage it is asking me to link it to the music from my own drive. I want it to find it on the internet for me.

Can you change it so that you find the music on the internet for me? Tell me what you selected and ask if I want to hear it?

It created some code that uses an API to retrieve the music, and because I don't feel like fussing with that, I have it rewrite the code one more time to play music without the need of an API.   It tells me I really should use the API, but gives me what I wanted anyway.  lol  This is the result:

shows a screenshot of the updated canvas window and chat with gemini.

preview:  (the preview is fully functional)

preview of the code as a functional app.

It doesn't sound like "Calming Ocean Waves", but it is fun music.  Eventually I got sick of the synthy fun and asked it to find more appropriate music.  Here is the code.  You can copy it into a text/notepad app and save it as an .html file if you want to play it in the browser on your computer:

https://drive.google.com/file/d/1HtllluclKKzucMunkwVQ6kpSAaxLBtYr/view?usp=sharing

There is another feature, "audio overview".  If you've used NotebookLM, I'm sure you've already seen it.  They are bringing it to regular Gemini, but I don't personally see it yet, so I believe it isn't fully rolled out.

 

Have fun!

 

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Everything Will Be Alright</title>
  <link href="https://fonts.googleapis.com/css2?family=Inter:wght@400;600&display=swap" rel="stylesheet">
  <style>
    body {
      margin: 0;
      display: flex;
      flex-direction: column;
      align-items: center;
      justify-content: center;
      height: 100vh;
      background-color: #f0f0f0;
      font-family: 'Inter', sans-serif;
      overflow: hidden; /* Prevent scrollbars during animation */
    }
    #canvas {
      position: fixed;
      top: 0;
      left: 0;
      width: 100%;
      height: 100%;
      z-index: -1;
    }
    .container {
      background-color: rgba(255, 255, 255, 0.8); /* Semi-transparent white */
      padding: 30px;
      border-radius: 12px;
      box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1);
      text-align: center;
      max-width: 80%;
      backdrop-filter: blur(10px);
      border: 1px solid rgba(255,255,255,0.3);
    }
    h1 {
      color: #333;
      margin-bottom: 20px;
      font-size: 2em;
    }
    p {
      color: #555;
      font-size: 1.1em;
      line-height: 1.6;
      margin-bottom: 25px;
    }
    button {
      padding: 12px 25px;
      border: none;
      border-radius: 6px;
      background-color: #4CAF50; /* Green */
      color: white;
      font-size: 1.1em;
      cursor: pointer;
      transition: background-color 0.3s ease;
      margin-top: 10px;
      font-family: 'Inter', sans-serif;
    }
    button:hover {
      background-color: #367c39; /* Darker green */
    }
    #music-container {
      display: flex;
      align-items: center;
      justify-content: center;
      margin-top: 20px;
      flex-direction: column; /* Stack items vertically */
    }
    #play-music {
      padding: 12px 25px;
      border: none;
      border-radius: 6px;
      background-color: #007BFF; /* Blue */
      color: white;
      font-size: 1.1em;
      cursor: pointer;
      transition: background-color 0.3s ease;
      margin-top: 10px;
      font-family: 'Inter', sans-serif;
    }
    #play-music:hover {
      background-color: #0056b3; /* Darker blue */
    }
    #music-controls {
      display: none; /* Initially hide the controls */
      flex-direction: column;
      align-items: center;
      margin-top: 10px;
      background-color: rgba(255,255,255,0.7);
      padding: 10px;
      border-radius: 8px;
      border: 1px solid rgba(255,255,255,0.3);
      backdrop-filter: blur(5px);
    }
    #music-controls button {
      margin: 5px;
      padding: 8px 16px;
      border: none;
      border-radius: 6px;
      background-color: #4CAF50; /* Green */
      color: white;
      font-size: 0.9em;
      cursor: pointer;
      transition: background-color 0.3s ease;
      font-family: 'Inter', sans-serif;
    }
    #music-controls button:hover {
      background-color: #367c39; /* Darker green */
    }
    #volume-control {
      width: 100px;
      margin: 10px 0;
    }
    #music-info {
      margin-bottom: 10px;
      font-size: 0.9em;
      color: #555;
    }
  </style>
</head>
<body>
  <canvas id="canvas"></canvas>
  <div class="container">
    <h1>Everything Will Be Alright</h1>
    <p id="quote">Loading a reassuring quote…</p>
    <div id="music-container">
      <p id="music-info">Finding soothing music…</p>
      <button id="play-music" style="display:none;">Play Soothing Music</button>
    </div>
    <div id="music-controls">
      <button id="pause-music">Pause</button>
      <button id="resume-music">Resume</button>
      <label for="volume-control">Volume:</label>
      <input type="range" id="volume-control" min="0" max="1" step="0.01">
    </div>
  </div>
  <script>
    const canvas = document.getElementById('canvas');
    const ctx = canvas.getContext('2d');
    canvas.width = window.innerWidth;
    canvas.height = window.innerHeight;

    let stars = [];
    const numStars = 200;

    function initializeStars() {
      stars = [];
      for (let i = 0; i < numStars; i++) {
        stars.push({
          x: Math.random() * canvas.width,
          y: Math.random() * canvas.height,
          radius: Math.random() * 2,
          color: `rgba(255, 255, 255, ${Math.random() * 0.5 + 0.5})`, // Varying opacity
          speedX: (Math.random() - 0.5) * 0.2, // Subtle movement
          speedY: (Math.random() - 0.5) * 0.2
        });
      }
    }

    function drawStars() {
      ctx.clearRect(0, 0, canvas.width, canvas.height);
      stars.forEach(star => {
        ctx.beginPath();
        ctx.arc(star.x, star.y, star.radius, 0, Math.PI * 2);
        ctx.fillStyle = star.color;
        ctx.fill();
        ctx.closePath();

        // Update star position for subtle animation
        star.x += star.speedX;
        star.y += star.speedY;

        // Wrap around edges
        if (star.x > canvas.width) star.x = 0;
        if (star.x < 0) star.x = canvas.width;
        if (star.y > canvas.height) star.y = 0;
        if (star.y < 0) star.y = canvas.height;
      });
    }

    function animateStars() {
      drawStars();
      requestAnimationFrame(animateStars);
    }

    initializeStars();
    animateStars();

    window.addEventListener('resize', () => {
      canvas.width = window.innerWidth;
      canvas.height = window.innerHeight;
      initializeStars(); // Re-initialize stars on resize for proper distribution
    });

    const quoteElement = document.getElementById('quote');
    const quotes = [
      "This too shall pass.",
      "Every cloud has a silver lining.",
      "You are stronger than you think.",
      "The best is yet to come.",
      "Breathe. You're going to be okay. Breathe and remember that you've been in this place before. You've been this uncomfortable and anxious and scared, and you've survived. Breathe and know that you can survive this too. These feelings can't break you. They're painful and exhausting, but they can't break you. You're going to be okay."
    ];

    function displayQuote() {
      const randomQuote = quotes[Math.floor(Math.random() * quotes.length)];
      quoteElement.textContent = randomQuote;
    }

    displayQuote();
    setInterval(displayQuote, 10000); // Change quote every 10 seconds

    const playMusicButton = document.getElementById('play-music');
    const audio = new Audio();
    const musicControls = document.getElementById('music-controls');
    const pauseMusicButton = document.getElementById('pause-music');
    const resumeMusicButton = document.getElementById('resume-music');
    const volumeControl = document.getElementById('volume-control');
    const musicInfo = document.getElementById('music-info');

    let isPlaying = false;
    let selectedTrack = null; // Store the selected track URL

    // Simulate an online search for music without an API
    function searchForSoothingMusic() {
      const tracks = [
        { title: "Relaxing Piano Music", url: "https://www.bensound.com/royalty-free-music/track/rainy-day-lo-fi-jazz" },
        { title: "Gentle Rain Sounds", url: "https://www.soundjay.com/buttons/sounds/rain-01a.mp3" },
        { title: "Calming Ocean Waves", url: "https://www.soundjay.com/buttons/sounds/ocean-wave-01.mp3" },
        { title: "Soft Ambient Music", url: "https://www.bensound.com/bensound-music/bensound-acousticbreeze.mp3" },
        { title: "Peaceful Flute Music", url: "https://www.soundjay.com/buttons/sounds/flute-c-note.mp3" },
      ];

      setTimeout(() => {
        selectedTrack = tracks[Math.floor(Math.random() * tracks.length)];
        musicInfo.textContent = `Found: "${selectedTrack.title}". Would you like to hear it?`;
        playMusicButton.textContent = "Play Music";
        playMusicButton.style.display = 'inline-block';
      }, 2000);
    }

    searchForSoothingMusic();

    playMusicButton.addEventListener('click', () => {
      if (selectedTrack) {
        audio.src = selectedTrack.url;
        musicControls.style.display = 'flex';

        if (isPlaying) {
          audio.pause();
          isPlaying = false;
          playMusicButton.textContent = "Resume Music";
        } else {
          audio.play().then(() => {
            isPlaying = true;
            playMusicButton.textContent = "Pause Music";
          }).catch(error => {
            console.error("Playback failed:", error);
            alert("Failed to play music. There may be an issue with the selected audio source.");
            isPlaying = false;
            playMusicButton.textContent = "Play Music";
          });
        }
      } else {
        alert("No music track selected.");
      }
    });

    pauseMusicButton.addEventListener('click', () => {
      if (isPlaying) {
        audio.pause();
        isPlaying = false;
        playMusicButton.textContent = "Resume Music";
      }
    });

    resumeMusicButton.addEventListener('click', () => {
      if (!isPlaying) {
        audio.play().then(() => {
          isPlaying = true;
          playMusicButton.textContent = "Pause Music";
        }).catch(error => {
          console.error("Playback failed:", error);
          alert("Failed to resume music.");
          isPlaying = false;
          playMusicButton.textContent = "Play Music";
        });
      }
    });

    volumeControl.addEventListener('input', () => {
      audio.volume = volumeControl.value;
    });

    audio.addEventListener('ended', () => {
      isPlaying = false;
      playMusicButton.textContent = "Play Music";
    });
  </script>
</body>
</html>


What a LaTeX file exported to HTML sounds like to a screen reader

hand drawn math formula

This is the original image of a handwritten math formula that was dragged into ChatGPT 4o with the query:

“I’m going to give you another image to interpret. Please switch this out to html if you can.”

It gave me this HTML:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Quantum Equation</title>
  <script type="text/javascript" async
    src="https://polyfill.io/v3/polyfill.min.js?features=es6">
  </script>
  <script type="text/javascript" async
    src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js">
  </script>
</head>
<body>

<h2>Quantum Mechanics Equation</h2>

<p>
\[
\dot{C}_m(t) = -\frac{i}{\hbar} \sum_n C_n(t) e^{i \omega_{mn} t} \langle \psi_m | H_1 | \psi_n \rangle
\]
</p>

</body>
</html>

and when this is opened in a browser, it looks like this:

screenshot of the rendered HTML in the browser

and SOUNDS like this in the screen reader:

Here is the transcript produced by NotebookLM from the audio file:

“over C subscript M left parenthesis T right parenthesis equals minus fraction start I over H with stroke end of fraction and area summation with N underneath C subscript N left parenthesis T right parenthesis E superscript I subscript MN T mathematical left angle bracket subscript M vertical line H subscript one vertical line subscript N mathematical right angle bracket”

ANOTHER cool tool from Google

image of the whole Whisk desktop

labs.google/fx/tools/whisk

Drop in a sample Subject, Scene, and/or Style image, and the AI creates a more detailed text description of each element and then recombines them.  This takes away some of the hassle of writing prompts for a new AI image.

 

Oh – and here is your audio “deep dive” from NotebookLM.


Some Videos from Google Re: NotebookLM

Transfigurations on Musings

Cosmic whale for cosmic writing

Hello, my friends. If you, like me, have ever gazed into the cosmos of thought and marveled at the boundless intersections of science, technology, and human understanding, then you’re in for a journey. Today, we embark on a thoughtful exploration inspired by the writings of Jennifer L. Adams—a thinker deeply entrenched in the realm of higher education, where technology and learning converge like celestial bodies in orbit. Her central question is both provocative and profound: Is artificial intelligence truly that different from how our own minds work?

Such a question beckons us to consider the intricate dance of memory, intelligence, and pattern recognition, and to marvel at their manifestations both natural and artificial. Adams begins her inquiry not in a laboratory or lecture hall, but in a bathtub—a setting both humble and evocative, echoing Archimedes himself. She watches whirlpools form and dissipate, contemplating the microscopic life swirling in these temporary eddies. Her curiosity takes her to a surprising discovery: slime mold.

Ah, slime mold—a single-celled organism seemingly so simple, yet capable of navigating mazes and anticipating environmental changes. Imagine: a creature with no brain, no nervous system, no neurons, yet it remembers. Its memory, Adams suggests, may be chemical, a fundamental organization of matter with purpose. It is here, in this primal intelligence, that we are invited to see echoes of artificial intelligence.

Adams draws a parallel to large language models (LLMs), like GPT-4. These models, too, operate without consciousness, yet they predict patterns and generate responses so human-like that they often blur the line between machine and mind. Consider this: when tasked with responding to a zoo worker’s query, the AI adapts, contextualizes, and personalizes its response. It mirrors the dynamic complexity of thought, much as the slime mold mirrors memory.

But Adams doesn’t stop at algorithms. She speculates on the broader implications of intelligence—animal, artificial, and human. She recounts the intricate songs of whales, passed down through generations, a kind of aquatic epic encoded in soundwaves. Could their communication represent an organic language model, evolved naturally and independently of human cognition? What might these songs tell us about their history, their emotions, their view of the universe?

This thought invites an even deeper question: if intelligence emerges in myriad forms—from the chemical traces of slime molds to the silicon networks of AI—what truly defines intelligence? Is it memory? Pattern recognition? Adaptability? Or something ineffable, like the capacity for wonder or the ability to ask questions about existence itself?

Adams provocatively ties this inquiry back to the classroom, to the very essence of learning. Imagine a world where AI personalizes education for every learner, a virtual tutor attuned to the unique pathways of each student’s mind. Yet here, she invokes a cautionary principle: the Prime Directive from Star Trek, a reminder that with great power comes great responsibility. How do we harness AI to amplify human potential without losing what makes learning an inherently human endeavor?

The bathtub becomes a metaphor for our role in this vast experiment. As Adams muses about pulling the plug, ending the microcosmic swirl of life, we are reminded of the fragility of discovery, the delicacy of choice. How we engage with AI, how we integrate it into education, and how we define its role in our society will shape not only our future but our very understanding of intelligence itself.

So, as we stare into the starry vastness of possibility, let us ponder: What if AI is not merely a tool, but a mirror? A mirror reflecting our own creativity, our capacity for connection, our endless curiosity? And in that reflection, perhaps we might better understand ourselves—not as isolated beings, but as part of a vast and intricate cosmos, forever learning, forever exploring.

Stay curious, my friends. The universe awaits.

Musings from the Bath

picture of a whale from the article about whales mentioned towards the bottom of the post.

There is a bit of fiction within the Star Trek universe that really made an impact on me.  It is tied to the Prime Directive:

The Prime Directive in Star Trek, also known as General Order 1, is a fundamental principle of the United Federation of Planets. It prohibits Starfleet personnel from interfering with the natural development of alien civilizations. Specifically, it forbids:

  • Interference with Pre-Warp Civilizations: Starfleet cannot reveal its presence or provide advanced technology to civilizations that have not yet developed faster-than-light (warp) travel. This is to prevent cultural contamination and ensure that societies evolve naturally.

  • Interference in Internal Affairs: Starfleet is not allowed to intervene in the internal political, social, or economic matters of any society, even if the civilization is warp-capable.

This is a directive that is often dismissed in the heat of an episode (particularly by Captain Kirk)… but in one movie, Star Trek II: The Wrath of Khan, it shows up as the crew of a science vessel is having a damn hard time finding a planet that has absolutely zero life on it.  This concept, in a world where people wonder if life exists anywhere else in the universe at all, was extremely captivating to me.

So I sat in the bath this morning and watched as tiny, fleeting whirling vortexes formed when I lowered my arms quickly into the water.  I wondered if there was microscopic life in the water that had been doing its thing in this newly created body of water – maybe organizing, reproducing, eating – that had just been swirled into a chaotic storm.  I wondered if this microscopic stuff might have thought something like "what the?" as everything was displaced around it.  Which made me think of language.

I asked Alexa (yes, I have a device in my bathroom – one with a nice speaker, of course): What is the most primitive life form that we know has a memory?  I asked this because I thought that proving one has memory proves some organization of thought – at least to me.  I'll accept arguments, of course.

So the answer is slime mold, which to be honest is a much more primitive life form than I was expecting the answer to be.  It turns out that these single-celled organisms can remember their way through mazes, and anticipate changes in temperature and light if they happen in a pattern.  They do not have brains or even a nervous system.  Other tiny, primitive organisms that show memory include bacteria, ciliates, and other more complicated, yet distinctly un-"animal"-like, plants and fungi.

So, AIs use what is called a Large Language Model (LLM) to appear intelligent.  These break down vast amounts of training data to analyze which tiny bit of language is the most likely bit to come next.  For example, if I write:

Hello, how are

Just as your phone uses predictive text so you don't have to type as many letters – you can see the most likely word could be

you?

If you are told that the person speaking is a 35-year-old who works at the zoo taking care of parrots, and they are speaking to their coworker as they change shifts (changing the query), you might still guess

you?

or you might think

the parrots?

Of course, GPT-4o is a blabbermouth by default, so when I ask it to finish the sentence

Hello, how are

It says

you doing today? I hope you’re having a great day! 😊

When I give it the query that I mentioned above, with the speaker being a parrot caregiver at the zoo, it responds:

you? I hope your shift went smoothly—any updates or things I should know about the parrots today?

Demonstrating its politeness and verbosity.
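To make the "most likely next bit" idea concrete, here is a toy sketch of next-word prediction in Python – just a bigram counter over a made-up corpus, nowhere near a real LLM, but the same guess-the-next-token principle:

from collections import Counter, defaultdict

# Toy sketch of next-word prediction: count which word follows which
# in a tiny made-up corpus, then predict the most frequent follower.
# Real LLMs use neural networks over subword tokens, not raw counts.
corpus = (
    "hello how are you . hello how are the parrots . "
    "hello how are you doing today"
).split()

follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def predict(word):
    # Return the most common word seen after `word` in the corpus.
    return follows[word].most_common(1)[0][0]

print(predict("are"))  # -> "you" (seen twice, vs "the" once)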

Anyway, back to my bath: I am scrubbing up, thinking about whether I think, therefore I am – only because I have a runaway LLM running inside my head, blah blah blahing at me – and I have a chemical system that remembers things.  I remember myself blabbing.  I remember conclusions I have come to.  I remember patterns.  I remember grudges and fondnesses.  (Sometimes I can't quite remember why I like or dislike someone – but I know I do.  Similarly, I might not remember what happens in a book, but I can tell you if I really liked it.  This will come in handy for retirement rereads.)

I remember me.

"If You Think You Can Hold a Grudge, Consider the Crow" – New York Times article.

[side note – did you know you can read the NYTimes as a person with an .edu email address?]

Do animals who use sounds to communicate use a small or midsized language model?  A chemical one, to be sure.  Just as we humans may be using a chemical LLM?

Did you hear about the whales yet?

I’m going somewhere with all of this.

I promise.  But I won’t be finishing these thoughts in today’s blog.

Maybe my LLM can interact with your LLM and we can make some new ideas together?

Oh – and to tie things up a bit: if I had never drained my bath, would something eventually have crawled out of it, wondered about the planet, and started a new civilization?  We will never know because, similar to what Captain Kirk might have done, I pulled the drain plug when I was finished.

Oh – and definitely check out the NotebookLM podcast generated from this post.

 

“It’s like AI is making us look in the mirror” 

 

When GPT makes me laugh and laugh…

A strawberry with two R’s

https://chatgpt.com/share/50375a02-92cf-4be3-875b-cc8c0a74af21

Here is the chat:

You are a top level computer that takes it’s time in data analysis – giving incredibly thoughtful answers to questions about words.
ChatGPT said:

It sounds like you’re describing a system that prioritizes deep analysis and precision in understanding and defining words. Such a system would be highly effective at exploring nuances in language, ensuring that every detail is carefully considered before providing an answer. This approach might involve cross-referencing multiple sources, analyzing context, and even considering the historical evolution of a word’s meaning to provide the most accurate and insightful response. Would you like to dive into a specific word or concept with this level of detail?


You said:
I want you to tell me how many letters are in the word “strawberry” and how many of each letter there is.
ChatGPT said:

The word “strawberry” has 10 letters in total. Here’s the breakdown of how many times each letter appears:

  • S: 1 time
  • T: 1 time
  • R: 2 times
  • A: 1 time
  • W: 1 time
  • B: 1 time
  • E: 1 time
  • Y: 1 time

So, the letter counts are:

  • S: 1
  • T: 1
  • R: 2
  • A: 1
  • W: 1
  • B: 1
  • E: 1
  • Y: 1


You said:
Tell me the same break down for the word “berry”
ChatGPT said:

The word “berry” has 5 letters in total. Here’s the breakdown of how many times each letter appears:

  • B: 1 time
  • E: 1 time
  • R: 2 times
  • Y: 1 time

So, the letter counts are:

  • B: 1
  • E: 1
  • R: 2
  • Y: 1


You said:
Now “Straw”
ChatGPT said:

The word “straw” has 5 letters in total. Here’s the breakdown of how many times each letter appears:

  • S: 1 time
  • T: 1 time
  • R: 1 time
  • A: 1 time
  • W: 1 time

So, the letter counts are:

  • S: 1
  • T: 1
  • R: 1
  • A: 1
  • W: 1


You said:
now add straw and berry together
ChatGPT said:

When you combine the letters in “straw” and “berry,” you get “strawberry,” which we’ve already analyzed. Here’s the breakdown of how many times each letter appears in the combined word:

  • S: 1 time
  • T: 1 time
  • R: 3 times (1 from “straw” + 2 from “berry”)
  • A: 1 time
  • W: 1 time
  • B: 1 time
  • E: 1 time
  • Y: 1 time

So, the combined letter counts for “straw” and “berry” are:

  • S: 1
  • T: 1
  • R: 3
  • A: 1
  • W: 1
  • B: 1
  • E: 1
  • Y: 1


You said:
Now do strawberry
ChatGPT said:

The breakdown of the word “strawberry” (which is “straw” + “berry”) is as follows:

  • S: 1 time
  • T: 1 time
  • R: 3 times
  • A: 1 time
  • W: 1 time
  • B: 1 time
  • E: 1 time
  • Y: 1 time

So the letter counts are:

  • S: 1
  • T: 1
  • R: 3
  • A: 1
  • W: 1
  • B: 1
  • E: 1
  • Y: 1

This matches the analysis of combining “straw” and “berry.”


You said:
Do you see that your answer about the letters in “strawberry” from the beginning of this conversation is different from your answer at the end?
ChatGPT said:

You’re right—I made an error earlier in the conversation. My initial breakdown of “strawberry” incorrectly listed the count of certain letters. The correct breakdown should indeed be:

  • S: 1 time
  • T: 1 time
  • R: 3 times
  • A: 1 time
  • W: 1 time
  • B: 1 time
  • E: 1 time
  • Y: 1 time

Both the initial breakdown and the later one should have matched. Thank you for pointing that out.
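For what it's worth, the deterministic answer takes a couple of lines of Python – which is exactly the kind of counting a next-token predictor is bad at:

from collections import Counter

# Count the letters in "strawberry" directly.
word = "strawberry"
print(len(word))      # 10
print(Counter(word))  # Counter({'r': 3, 's': 1, 't': 1, 'a': 1, 'w': 1, 'b': 1, 'e': 1, 'y': 1})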