Category Archives: AI

Create your own Coloring Pages with AI

Coloring pages may bring to mind childhood afternoons filled with crayons, construction paper, and cartoon animals. But far from being just a nostalgic pastime, coloring has evolved into a multi-faceted educational and wellness tool. From preschool classrooms to graduate-level STEM programs, coloring pages are proving their worth in surprising and impactful ways.


The Foundations: Coloring in Early Education

Coloring has long held a valuable place in early childhood education. For Pre-K through elementary students, the benefits are well-documented:

  • Motor Skills Development: Gripping crayons and staying within lines strengthens hand muscles and improves fine motor control, which is foundational for writing.

  • Hand-Eye Coordination: Coloring encourages visual and spatial reasoning as children learn to coordinate sight and hand movement.

  • Color Recognition & Creativity: It’s an early introduction to color theory, symmetry, and emotional expression.

These early coloring activities aren’t just fun—they lay groundwork for cognitive and physical development that supports academic readiness.

Generated image showing a kids' coloring page with mythical creatures


Coloring for Wellness: The Adult Coloring Book Trend

Coloring isn’t just for kids anymore. Over the past decade, adult coloring books have surged in popularity, often found alongside mindfulness journals and stress relief guides.

  • Mental Health Support: Coloring is meditative. It offers a structured yet creative outlet that helps reduce anxiety and promotes focus.

  • Digital Detox: In an age of constant screen time, analog activities like coloring offer a necessary and calming break.

  • Creativity for All: Adult coloring books span themes like architecture, nature, literature, and fantasy—inviting users to re-engage with play and imagination.

Generated image that says “Mental Health Matters” surrounded by fancy floral patterns to color in.

 


Coloring in Higher Education: A Study Tool with Serious Potential

What may surprise many is how coloring pages are being used as study tools in secondary and post-secondary education. When applied strategically, coloring supports memorization, spatial reasoning, and content engagement across disciplines:

Science and Medicine

  • Anatomy Diagrams: Coloring bones, muscles, and internal organs reinforces spatial understanding of the human body.

  • Biology Processes: Cellular structures, DNA replication cycles, and taxonomies become more tangible when color-coded.

  • Chemistry: Molecular diagrams, periodic tables, and lab safety signs are easier to recall when visually and kinesthetically processed.

Neuroscience and Psychology

  • Brain Mapping: Coloring different lobes and functional areas supports recall and understanding of cognitive pathways.

  • Behavioral Models: Reinforce theories like Maslow’s hierarchy or Pavlovian conditioning using symbolic visuals.

 

Behavioral model coloring example

Geography and History

  • Map Coloring: Students studying political boundaries, biomes, or historical trade routes engage more deeply by physically interacting with the content.

  • Historical Timelines: Color-coded eras or civilizations help clarify chronology and cultural evolution.

Engineering and Architecture

  • Mechanical Diagrams: For fields like mechanical, civil, or electrical engineering, coloring gears, circuits, or CAD printouts helps students see system relationships.

  • Blueprints and Drafts: Color can be used to identify materials, stress points, or workflow systems.

Fine Arts and Humanities

  • Masterpiece Studies: Recoloring works by Van Gogh or Kandinsky helps students internalize techniques, palettes, and emotional tones.

  • Literature & Language: Use thematic coloring to analyze character development, plot structures, or vocabulary clusters.

Coloring example of protein folding

Why Coloring Works as a Cognitive Tool

  • Multisensory Learning: Coloring combines visual, tactile, and kinesthetic inputs—an ideal blend for many learners.

  • Memory Encoding: The act of choosing and applying color helps encode information more deeply than passive reading.

  • Pattern Recognition: Regularly coloring systems or diagrams builds a strong visual memory for complex patterns.


By using AI to generate custom coloring pages, we can now move far beyond cartoon animals and into a world of personalized learning, interactive study tools, and mindful relaxation—all tailored to age, subject, and interest.


Here is a step-by-step example of using ChatGPT to make your own coloring pages:

ChatGPT opening prompt, “What can I help with?”, showing the icon to upload a file.

Click the + sign in the prompt text entry window.

Expanded view when hovering over the + sign

Click Add photos and files

Image of the attached photo with the prompt "please make this into a coloring page for me"

Screenshot showing the photo with the returned coloring page.

Click share (the arrow going up)

screenshot showing share options Copy Link and Download

(You can also drag photos or illustrations into the prompt window, and drag the finished images back out to your desktop, if that works better for you.)
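If you'd rather script this step than click through the UI, the same transformation can be done with the OpenAI Images API. Here is a minimal sketch, with some loud assumptions: you have an API key set in your environment, the openai Python package installed, and access to the gpt-image-1 model (model names and availability can differ from what the ChatGPT app uses), and the file names are placeholders.

—————————————————————–

import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Send a source photo plus roughly the same prompt used in the web UI.
with open("photo.png", "rb") as src:
    result = client.images.edit(
        model="gpt-image-1",  # assumption: your account has access to this model
        image=src,
        prompt="Please make this into a coloring page for me: "
               "clean black line art on a white background.",
    )

# The image comes back base64-encoded; decode and save it.
with open("coloring_page.png", "wb") as out:
    out.write(base64.b64decode(result.data[0].b64_json))

—————————————————————–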

Combining 3D Printing and AI

As my faithful followers will know, I've been reconditioning some used 3D printers. This week I've been working on an old Ultimaker 2+. To get it to print something, I had to load a file onto an SD card, as this printer has no native ability to connect to a computer or network. I did not have an SD card reader for my Mac at first, so I printed some files that were already on the card. After some poking and prodding that did eventually work, but the files were uninteresting. Around this time, my coworker Victoria Pilato lent me her SD card reader, so I happily went looking for something to print on Thingiverse, found a cute little fancy statue, and set to printing.

Then I stopped the print job…

Even though the files on the card had been boring, they had at least printed OK. This time the printer was trying to print from a few millimeters above the print bed. I readjusted the bed calibration (which seemed fine, as I had already done it earlier) and still had the problem.

The next morning, I'm feeling ready to tackle this. I ask ChatGPT. It suggests my bed could be off, or POSSIBLY something weird in the G-code could be telling it to start the first layer higher on the Z axis. I poke around in Ultimaker Cura, trying to find the issue, but I don't see the problem.

  • AI Fixes the G-code

It occurs to me that maybe ChatGPT can just read the G-code. I dump it on the AI, and it turns out that it can. It tells me mostly everything looks all right… BUT it says the file is configured for the Ultimaker 2+ Connect, which has auto-leveling, while my model does not, and that I should fix that in Cura. Now, in Cura I can clearly see that I have it configured for the correct printer, not the Connect. So I ask GPT to fix the code directly, instead of me doing it in Cura. And it does.
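Incidentally, this kind of check is easy to do yourself, since G-code is just text. Here is a small Python sketch (the file name is a placeholder) that prints the first few Z moves in a sliced file, so a first layer starting several millimeters above the bed stands out immediately:

—————————————————————–

import re

# Scan a sliced G-code file and report the first few Z moves.
z_moves = []
with open("benchy.gcode") as gcode:
    for line_no, line in enumerate(gcode, start=1):
        match = re.match(r"G[01]\b.*\bZ([-\d.]+)", line)
        if match:
            z_moves.append((line_no, float(match.group(1))))
        if len(z_moves) >= 5:
            break

for line_no, z in z_moves:
    print(f"line {line_no}: Z = {z:.2f} mm")

# On a healthy profile, the first printing move usually sits around Z = 0.2-0.3 mm.

—————————————————————–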

 

  • AI Suggests a New 3D Model

I had already skipped the fancy little model and had been working on a simple Benchy today. After it provides me with the new code, and that code is happily chugging away on the printer, I ask GPT if it can alter the code to actually change the shape of the model. It says it cannot, tells me what programs to use… and yet still offers to make a prototype. I say sure. It says it will give me code, but instead gives me an image. Again it tells me it can generate an .stl file, but instead makes a placeholder, with another image.

Oh, by the way, I've told it to merge the Benchy with a UFO, because, why not?

UFO-infused Benchy as created by GPT. It has saucer elements both around the base of the boat and above, almost like an umbrella.

This is an AI rendered image, not a picture of a printed model.

  • AI Converts a 2D Image into a 3D Render

So at this point, I'm pretty happy with the cute Benchy from Outer Space, and I decide to download that image and bring it into Meshy. Meshy is a browser-based AI 3D model generator that can make a model either from a text prompt or from an image. I previously made a little seawolf with it, starting from an image.

Original Wolfie picture

3D creation by Meshy, from the image

Printed Meshy model, with no cleanup attempted

The one thing I wasn't thrilled about the first time I printed Wolfie straight from Meshy was that I thought it lost some of the detail and looked a bit blobby.

From inside Meshy, I loaded the picture I wanted it to generate a 3D model from. As when you ask any AI to make an image, it gives me four choices. You can see they are doing the usual AI thing: not really trying to reproduce exactly what you gave it, but offering some creative adaptations.

AI-created versions of the UFO/Benchy

Some of the choices are already less blobby than others, but I really wanted the one that looked the most like what GPT had made, so I went with the one in the top left, which already looks lumpy.

UFO-infused Benchy as created by GPT. It has saucer elements both around the base of the boat and above, almost like an umbrella.

  • AI Makes Recommendations to Refine Meshy Model

Me: “I have brought a model into Blender from Meshy. The model looks kind of globby. Is there an easy way to clean it up so it looks more defined?”

GPT:

Top Reasons for “Globby” Models from Meshy and How to Fix Them

🟠 1. Surface is Too Smooth (Normals or Shading)

  • Meshy sometimes auto-applies smooth shading, which can make details look melted or blobby.

✅ Fix in Blender:

  1. Select the model in Object Mode.

  2. Go to the left toolbar, or press Right-click → Shade Flat.

    • Instantly sharpens up the look by reverting from smooth shading.


🟠 2. Mesh is Over-Triangulated or Dense

  • Meshy-generated models often have millions of triangles, making them look blobby or noisy.

✅ Fix:

  • In Edit Mode, run:

    • F3 → search Decimate Geometry (non-destructive, try ratios like 0.2–0.6)

    • Or use Remesh Modifier (Voxel mode, low voxel size like 0.05–0.1)


🟠 3. Normals Are Messed Up

  • If the surface has weird lighting or jagged shading, the normals may be flipped.

✅ Fix:

  1. Go into Edit Mode (Tab)

  2. Press A to select all

  3. Press Shift + N to Recalculate Normals


🟠 4. Model Scale Is Off

  • Sometimes imported models are tiny or huge and look off.

✅ Fix:

  • Press N for the sidebar → check scale

  • Reset with Ctrl + A → Apply Scale


🧠 Tip: For 3D Printing

  • Use Mesh → Clean Up in Edit Mode to:

    • Delete loose geometry

    • Fill holes

    • Merge by distance

    • Remove doubles


If you want, you can send me a screenshot or description of what it looks like, and I’ll give you a specific cleanup sequence.

Or — I can write you a custom macro that does it all in one go!

Me: Macro???  Yes please!

Here is the gold — lol. (I didn't know you could throw a Python script into Blender. Even if you did, maybe you hadn't thought to have AI write it for you.)

—————————————————————–

import bpy
import bmesh

# Get active object and ensure it's a mesh
obj = bpy.context.active_object
if obj and obj.type == 'MESH':

    # Ensure we're in Object Mode
    bpy.ops.object.mode_set(mode='OBJECT')

    # Shade Flat
    bpy.ops.object.shade_flat()

    # Add Subdivision Surface Modifier (non-destructive)
    subdiv = obj.modifiers.new(name="Subdivision", type='SUBSURF')
    subdiv.levels = 2
    subdiv.render_levels = 2

    # Enable Auto Smooth to preserve sharp edges
    obj.data.use_auto_smooth = True
    obj.data.auto_smooth_angle = 1.0472  # 60 degrees in radians

    # Go into Edit Mode for cleanup
    bpy.ops.object.mode_set(mode='EDIT')
    bm = bmesh.from_edit_mesh(obj.data)

    # Merge by distance to remove doubles
    bmesh.ops.remove_doubles(bm, verts=bm.verts, dist=0.0001)

    # Recalculate normals
    bmesh.ops.recalc_face_normals(bm, faces=bm.faces)

    bmesh.update_edit_mesh(obj.data)
    bpy.ops.object.mode_set(mode='OBJECT')

    print("✨ Cleaned, smoothed, and preserved edges!")

else:
    print("⚠️ Error: Please select a mesh object in Object Mode.")

—————————————————————–
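To run it yourself: in Blender, open the Scripting workspace, click New, paste the script into the text editor, select your imported model in the viewport, and hit Run Script. One caveat of my own, not GPT's: the use_auto_smooth property was removed in Blender 4.1, so on newer versions those two lines will throw an error; the rough equivalent there is Object > Shade Auto Smooth.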

Here is what it looks like after applying the script:

It isn't perfect… but it is a whole lot better. BTW: Blender is free; Meshy has a paid version that may very well make my models look better, but I am using the limited free version; and the original Benchy model is free from Thingiverse, as are many other items (though tips are encouraged; remember beerware?). I pay for ChatGPT, but I suspect this could have been achieved with the free version as well.

After this, I brought it over to Formware to do some last-minute manifold repairs on the .stl file.

“You can use this free online tool without hassle to fix your stl files quickly. Files are deleted after download or otherwise after 6 hours to keep our systems clean. The fixer accepts binary and ascii .stl files.”
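If you'd rather do a quick manifold check locally before uploading, the open-source trimesh Python library can report (and sometimes patch) the usual problems. This is a rough sketch with a placeholder file name, not a replacement for a dedicated fixer like Formware:

—————————————————————–

import trimesh

mesh = trimesh.load("ufo_benchy.stl")  # placeholder file name
print("watertight:", mesh.is_watertight)
print("faces:", len(mesh.faces))

if not mesh.is_watertight:
    trimesh.repair.fix_normals(mesh)  # make face windings consistent
    trimesh.repair.fill_holes(mesh)   # close simple holes
    print("watertight after repair:", mesh.is_watertight)

mesh.export("ufo_benchy_fixed.stl")

—————————————————————–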

And here is the print after the supports are removed:

UFO/Benchy printed out

Video from the Copilot page

I went to https://copilot.microsoft.com/

And then I saw this option on the side. It says Visual Creator.

List of clickable options on the right side of the page [Copilot, Agents, Visual Creator, Get Agents, Create an agent]

New sample prompts come up including “Create a video with stock media.”

My Prompt:

show a video about safety when using a drill

This was produced in short order:

While I was still on that page, I clicked to open the video, and it opened in an editor called Clipchamp.

This is a screenshot of the Clipchamp GUI.

So – that's cool.

What a LaTeX file exported to HTML sounds like to a screen reader

Hand-drawn math formula

This is the original image of a handwritten math formula that was dragged into ChatGPT 4o with the query:

“I’m going to give you another image to interpret. Please switch this out to html if you can.”

It gave me the html:

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Quantum Equation</title>
<script type="text/javascript" async
src="https://polyfill.io/v3/polyfill.min.js?features=es6">
</script>
<script type="text/javascript" async
src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js">
</script>
</head>
<body>

<h2>Quantum Mechanics Equation</h2>

<p>
\[
\dot{C}_m(t) = -\frac{i}{\hbar} \sum_n C_n(t) e^{i \omega_{mn} t} \langle \psi_m | H_1 | \psi_n \rangle
\]
</p>

</body>
</html>

and when this is opened in a browser, it looks like this:

Screenshot of the rendered equation in the browser

and SOUNDS like this in the screen reader:

Here is the transcript produced by NotebookLM from the audio file:

“over C subscript M left parenthesis T right parenthesis equals minus fraction start I over H with stroke end of fraction and area summation with N underneath C subscript N left parenthesis T right parenthesis E superscript I subscript MN T mathematical left angle bracket subscript M vertical line H subscript one vertical line subscript N mathematical right angle bracket”

Sora: A First Shot

If you pay for ChatGPT, you now have access to Sora, their video creation AI application. This is my first attempt… It's a bird, it's a plane, it's Super Goat!

Prompt: a herd of mountain goats can be seen charging up the side of a rocky hill. One of the ones in the front is carrying a red banner. It is very dramatic and the camera zooms in as they get to the top of the hill.

ANOTHER cool tool from Google

image of the whole Whisk desktop

labs.google/fx/tools/whisk

Drop in a sample Subject, Scene, and/or Style image, and the AI creates a more detailed text description of each element and then recombines them. This takes away some of the hassle of writing prompts for a new AI image.

Oh – and here is your audio “deep dive” from NotebookLM.

Google: Project Astra

Webinar: NotebookLM: Intro for Stony Brook University

NotebookLM

Stony Brook University has now turned on NotebookLM for your campus Google account! Go have fun!

Transfigurations on Musings

Cosmic whale for cosmic writing

Hello, my friends. If you, like me, have ever gazed into the cosmos of thought and marveled at the boundless intersections of science, technology, and human understanding, then you’re in for a journey. Today, we embark on a thoughtful exploration inspired by the writings of Jennifer L. Adams—a thinker deeply entrenched in the realm of higher education, where technology and learning converge like celestial bodies in orbit. Her central question is both provocative and profound: Is artificial intelligence truly that different from how our own minds work?

Such a question beckons us to consider the intricate dance of memory, intelligence, and pattern recognition, and to marvel at their manifestations both natural and artificial. Adams begins her inquiry not in a laboratory or lecture hall, but in a bathtub—a setting both humble and evocative, echoing Archimedes himself. She watches whirlpools form and dissipate, contemplating the microscopic life swirling in these temporary eddies. Her curiosity takes her to a surprising discovery: slime mold.

Ah, slime mold—a single-celled organism seemingly so simple, yet capable of navigating mazes and anticipating environmental changes. Imagine: a creature with no brain, no nervous system, no neurons, yet it remembers. Its memory, Adams suggests, may be chemical, a fundamental organization of matter with purpose. It is here, in this primal intelligence, that we are invited to see echoes of artificial intelligence.

Adams draws a parallel to large language models (LLMs), like GPT-4. These models, too, operate without consciousness, yet they predict patterns and generate responses so human-like that they often blur the line between machine and mind. Consider this: when tasked with responding to a zoo worker’s query, the AI adapts, contextualizes, and personalizes its response. It mirrors the dynamic complexity of thought, much as the slime mold mirrors memory.

But Adams doesn’t stop at algorithms. She speculates on the broader implications of intelligence—animal, artificial, and human. She recounts the intricate songs of whales, passed down through generations, a kind of aquatic epic encoded in soundwaves. Could their communication represent an organic language model, evolved naturally and independently of human cognition? What might these songs tell us about their history, their emotions, their view of the universe?

This thought invites an even deeper question: if intelligence emerges in myriad forms—from the chemical traces of slime molds to the silicon networks of AI—what truly defines intelligence? Is it memory? Pattern recognition? Adaptability? Or something ineffable, like the capacity for wonder or the ability to ask questions about existence itself?

Adams provocatively ties this inquiry back to the classroom, to the very essence of learning. Imagine a world where AI personalizes education for every learner, a virtual tutor attuned to the unique pathways of each student’s mind. Yet here, she invokes a cautionary principle: the Prime Directive from Star Trek, a reminder that with great power comes great responsibility. How do we harness AI to amplify human potential without losing what makes learning an inherently human endeavor?

The bathtub becomes a metaphor for our role in this vast experiment. As Adams muses about pulling the plug, ending the microcosmic swirl of life, we are reminded of the fragility of discovery, the delicacy of choice. How we engage with AI, how we integrate it into education, and how we define its role in our society will shape not only our future but our very understanding of intelligence itself.

So, as we stare into the starry vastness of possibility, let us ponder: What if AI is not merely a tool, but a mirror? A mirror reflecting our own creativity, our capacity for connection, our endless curiosity? And in that reflection, perhaps we might better understand ourselves—not as isolated beings, but as part of a vast and intricate cosmos, forever learning, forever exploring.

Stay curious, my friends. The universe awaits.