Embedding Video into a Webpage or Brightspace… from Google Drive

So, as is normal, there are different ways to tackle embedding video using the systems here at SBU. I’m just going to look at Google Drive embeds.  When you embed a video, it plays right there in the page; it isn’t something you have to click that then opens a new page or downloads the video to view it.  Like this:

(BTW – the video I am using in this demonstration was made with Copilot, as I outlined in the previous blog post.)

So how did I do that?  You usually can’t just drop a video into this blog site, as it will likely be too big.  So the first thing I did after making the video was drop it into my Google Drive. Make sure you share it with anyone who has the link, or anyone at SBU, as appropriate.

The thing is, the embed code from Google Drive is a wee bit hidden.  You have to click the three dots next to the file name and open it in a new tab, then click the three dots above the player area, and at the very bottom of that menu you will find the embed code.  Highlight and copy the code from there.

Now that you have the embed code, what next?  Well, here in the blog I switch from the Visual view to the Text view, which shows me the HTML for the page, and I paste the code in.  It might look a little scary, but right after you paste, you can switch right back to the Visual mode and safety.
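For reference, the snippet Drive gives you is basically an iframe pointing at the file’s preview link. It will look roughly like this, where YOUR_FILE_ID stands in for the long ID of your own file, and the width and height are just starting values you can adjust:

<iframe src="https://drive.google.com/file/d/YOUR_FILE_ID/preview" width="640" height="480" allow="autoplay"></iframe>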

In Brightspace, go to the area where you want the video and find this icon:

the "insert stuff" icon from the wysiwyg menu in brightspace.

This is the “insert stuff” button, which can be found in just about any text entry window where you can format the text.

List of the stuff you can insert. Embed Code is at the bottom.

Enter Embed Code is at the bottom

this shows the interface where you paste the embed code.

Paste in your code and hit Next.  It gives you a warning that you are getting media from another site.  Click Allow. Click Insert.  I got that warning again, and I told it to allow it again.  And now it looks like this:

screenshot showing the video embedded into a brightspace page.

Google Drive auto-captions the videos that you put in the Drive. (You should still check and edit them.)

You do not get all of the engagement tools and analytics that you get with Echo360 embedding, but I felt that this was still worth sharing.


Video from the Copilot page

I went to https://copilot.microsoft.com/

And then I saw this option on the side.  It says Visual Creator

list of clickable options on the right side of the page [copilot, Agents, Visual Creator, Get Agents, Create an agent]

New sample prompts come up including “Create a video with stock media.”

My Prompt:

show a video about safety when using a drill

This was produced in short order:

and while I was still on that page, I clicked to open the video, and it opened in an editor called Clipchamp.

This is a screenshot of the Clipchamp GUI.

So – that’s cool.


Gemini Feature Boost – Canvas

What is that Canvas thingy? (screenshot of the prompt entry box with a new button.) If you have been using ChatGPT, you may have already seen this feature there, but Gemini has added a “canvas” to its potential workflow.  The canvas is a separate “frame” or space on the page that gets updated as you work on whatever is in it with Gemini, without it scrolling up and away as you progress.  I’ve mostly used it for things like code, but you can totally use it for working on a draft of a regular piece of writing, like an essay, speech, screenplay, etc.

To activate it, you will want to open https://gemini.google.com/app.  

Once there, tell it what you want to work on – I’ll think of something on the simple side.  Here is my prompt:

I would like to write a small app that lets me know that everything will be alright. I want it to suggest a nice quote and also ask me if I want to hear some soothing music. Can you help me write that?

Then I want to use the new canvas feature.  This requires that you click the button in the chat/prompt window that says “Canvas.”  Now my screen looks like this:

a screenshot of gemini running a "canvas". It shows the chat area on the left and a new coding window or frame on the right.

It even has a preview mode!

screenshot of the preview mode on the code it wrote.

I’m going to ask it to change how the music is selected. At this stage it is asking me to link it to the music from my own drive. I want it to find it on the internet for me.

Can you change it so that you find the music on the internet for me? Tell me what you selected and ask if I want to hear it?

It created some code that uses an API to retrieve the music, and because I don’t feel like fussing with that, I have it rewrite the code one more time to play music without the need of an API.   It tells me I really should use the API, but gives me what I wanted anyway.  lol  This is the result:

shows a screenshot of the updated canvas window and chat with gemini.

preview:  (the preview is fully functional)

preview of the code as a functional app.

It doesn’t sound like “Calming Ocean Waves,” but it is fun music.  Eventually I got sick of the synthy fun and asked it to find more appropriate music.  Here is the code.  You can copy it into a text/notepad app and save it as an .html file if you want to play it in the browser on your computer:

https://drive.google.com/file/d/1HtllluclKKzucMunkwVQ6kpSAaxLBtYr/view?usp=sharing

There is another feature, “Audio Overview.”  If you’ve used NotebookLM, I’m sure you’ve already seen it.  They are bringing it to regular Gemini, but I don’t personally see it yet, so I believe it isn’t fully rolled out yet.

 

Have fun!

 

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Everything Will Be Alright</title>
<link href="https://fonts.googleapis.com/css2?family=Inter:wght@400;600&display=swap" rel="stylesheet">
<style>
body {
margin: 0;
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
height: 100vh;
background-color: #f0f0f0;
font-family: 'Inter', sans-serif;
overflow: hidden; /* Prevent scrollbars during animation */
}
#canvas {
position: fixed;
top: 0;
left: 0;
width: 100%;
height: 100%;
z-index: -1;
}
.container {
background-color: rgba(255, 255, 255, 0.8); /* Semi-transparent white */
padding: 30px;
border-radius: 12px;
box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1);
text-align: center;
max-width: 80%;
backdrop-filter: blur(10px);
border: 1px solid rgba(255,255,255,0.3);

}
h1 {
color: #333;
margin-bottom: 20px;
font-size: 2em;
}
p {
color: #555;
font-size: 1.1em;
line-height: 1.6;
margin-bottom: 25px;
}
button {
padding: 12px 25px;
border: none;
border-radius: 6px;
background-color: #4CAF50; /* Green */
color: white;
font-size: 1.1em;
cursor: pointer;
transition: background-color 0.3s ease;
margin-top: 10px;
font-family: 'Inter', sans-serif;
}
button:hover {
background-color: #367c39; /* Darker green */
}
#music-container {
display: flex;
align-items: center;
justify-content: center;
margin-top: 20px;
flex-direction: column; /* Stack items vertically */
}

#play-music {
padding: 12px 25px;
border: none;
border-radius: 6px;
background-color: #007BFF; /* Blue */
color: white;
font-size: 1.1em;
cursor: pointer;
transition: background-color 0.3s ease;
margin-top: 10px;
font-family: 'Inter', sans-serif;
}

#play-music:hover {
background-color: #0056b3; /* Darker blue */
}

#music-controls {
display: none; /* Initially hide the controls */
flex-direction: column;
align-items: center;
margin-top: 10px;
background-color: rgba(255,255,255,0.7);
padding: 10px;
border-radius: 8px;
border: 1px solid rgba(255,255,255,0.3);
backdrop-filter: blur(5px);
}

#music-controls button {
margin: 5px;
padding: 8px 16px;
border: none;
border-radius: 6px;
background-color: #4CAF50; /* Green */
color: white;
font-size: 0.9em;
cursor: pointer;
transition: background-color 0.3s ease;
font-family: 'Inter', sans-serif;
}

#music-controls button:hover {
background-color: #367c39; /* Darker green */
}

#volume-control {
width: 100px;
margin: 10px 0;
}

#music-info {
margin-bottom: 10px;
font-size: 0.9em;
color: #555;
}

</style>
</head>
<body>
<canvas id="canvas"></canvas>
<div class="container">
<h1>Everything Will Be Alright</h1>
<p id="quote">Loading a reassuring quote…</p>
<div id="music-container">
<p id="music-info">Finding soothing music…</p>
<button id="play-music" style="display:none;">Play Soothing Music</button>
</div>
<div id="music-controls">
<button id="pause-music">Pause</button>
<button id="resume-music">Resume</button>
<label for="volume-control">Volume:</label>
<input type="range" id="volume-control" min="0" max="1" step="0.01">
</div>
</div>
<script>
const canvas = document.getElementById('canvas');
const ctx = canvas.getContext('2d');
canvas.width = window.innerWidth;
canvas.height = window.innerHeight;

let stars = [];
const numStars = 200;

function initializeStars() {
stars = [];
for (let i = 0; i < numStars; i++) {
stars.push({
x: Math.random() * canvas.width,
y: Math.random() * canvas.height,
radius: Math.random() * 2,
color: `rgba(255, 255, 255, ${Math.random() * 0.5 + 0.5})`, // Varying opacity
speedX: (Math.random() - 0.5) * 0.2, // Subtle movement
speedY: (Math.random() - 0.5) * 0.2
});
}
}

function drawStars() {
ctx.clearRect(0, 0, canvas.width, canvas.height);
stars.forEach(star => {
ctx.beginPath();
ctx.arc(star.x, star.y, star.radius, 0, Math.PI * 2);
ctx.fillStyle = star.color;
ctx.fill();
ctx.closePath();

// Update star position for subtle animation
star.x += star.speedX;
star.y += star.speedY;

// Wrap around edges
if (star.x > canvas.width) star.x = 0;
if (star.x < 0) star.x = canvas.width;
if (star.y > canvas.height) star.y = 0;
if (star.y < 0) star.y = canvas.height;
});
}

function animateStars() {
drawStars();
requestAnimationFrame(animateStars);
}

initializeStars();
animateStars();

window.addEventListener('resize', () => {
canvas.width = window.innerWidth;
canvas.height = window.innerHeight;
initializeStars(); // Re-initialize stars on resize for proper distribution
});

const quoteElement = document.getElementById('quote');
const quotes = [
"This too shall pass.",
"Every cloud has a silver lining.",
"You are stronger than you think.",
"The best is yet to come.",
"Breathe. You're going to be okay. Breathe and remember that you've been in this place before. You've been this uncomfortable and anxious and scared, and you've survived. Breathe and know that you can survive this too. These feelings can't break you. They're painful and exhausting, but they can't break you. You're going to be okay."
];

function displayQuote() {
const randomQuote = quotes[Math.floor(Math.random() * quotes.length)];
quoteElement.textContent = randomQuote;
}

displayQuote();
setInterval(displayQuote, 10000); // Change quote every 10 seconds

const playMusicButton = document.getElementById('play-music');
const audio = new Audio();
const musicControls = document.getElementById('music-controls');
const pauseMusicButton = document.getElementById('pause-music');
const resumeMusicButton = document.getElementById('resume-music');
const volumeControl = document.getElementById('volume-control');
const musicInfo = document.getElementById('music-info');

let isPlaying = false;
let selectedTrack = null; // Store the selected track URL

// Simulate an online search for music without an API
function searchForSoothingMusic() {
const tracks = [
{ title: "Relaxing Piano Music", url: "https://www.bensound.com/royalty-free-music/track/rainy-day-lo-fi-jazz" },
{ title: "Gentle Rain Sounds", url: "https://www.soundjay.com/buttons/sounds/rain-01a.mp3" },
{ title: "Calming Ocean Waves", url: "https://www.soundjay.com/buttons/sounds/ocean-wave-01.mp3" },
{ title: "Soft Ambient Music", url: "https://www.bensound.com/bensound-music/bensound-acousticbreeze.mp3" },
{ title: "Peaceful Flute Music", url: "https://www.soundjay.com/buttons/sounds/flute-c-note.mp3" },
];

setTimeout(() => {
selectedTrack = tracks[Math.floor(Math.random() * tracks.length)];
musicInfo.textContent = `Found: "${selectedTrack.title}". Would you like to hear it?`;
playMusicButton.textContent = "Play Music";
playMusicButton.style.display = 'inline-block';
}, 2000);
}

searchForSoothingMusic();

playMusicButton.addEventListener('click', () => {
if (selectedTrack) {
audio.src = selectedTrack.url;
musicControls.style.display = 'flex';

if (isPlaying) {
audio.pause();
isPlaying = false;
playMusicButton.textContent = "Resume Music";
} else {
audio.play().then(() => {
isPlaying = true;
playMusicButton.textContent = "Pause Music";
}).catch(error => {
console.error("Playback failed:", error);
alert("Failed to play music. There may be an issue with the selected audio source.");
isPlaying = false;
playMusicButton.textContent = "Play Music";
});
}
} else {
alert("No music track selected.");
}
});

pauseMusicButton.addEventListener('click', () => {
if (isPlaying) {
audio.pause();
isPlaying = false;
playMusicButton.textContent = "Resume Music";
}
});

resumeMusicButton.addEventListener('click', () => {
if (!isPlaying) {
audio.play().then(() => {
isPlaying = true;
playMusicButton.textContent = "Pause Music";
}).catch(error => {
console.error("Playback failed:", error);
alert("Failed to resume music.");
isPlaying = false;
playMusicButton.textContent = "Play Music";
});
}
});

volumeControl.addEventListener('input', () => {
audio.volume = volumeControl.value;
});

audio.addEventListener('ended', () => {
isPlaying = false;
playMusicButton.textContent = "Play Music";
});

</script>
</body>
</html>

 

 

InDesign Scripts (Including some that work for accessibility needs.)

a screenshot showing the menu navigation, Window -> Utilities -> Scripts, from inside InDesign.

Gilbert Consulting offers custom Adobe InDesign and Illustrator scripts to solve workflow issues. They provide 38 free scripts designed to automate tasks and enhance functionality within these programs. These scripts cover a range of functions, from adding hyperlinks and page buttons to analyzing documents and exporting files in various formats. Many scripts address specific formatting needs like table styles, accessibility checks, and image conversions. The scripts are frequently updated, and several are accompanied by blog posts or videos offering further instruction and context.

https://gilbertconsulting.com/scripts

I discovered these while taking the Advanced Accessible PDFs course on LinkedIn by Chad Chelius.

Enjoy!

(more here – including a color contrast checker: https://accessibilityscripts.com/store/product-category/indesign/)

 

What a LaTeX file exported to html sounds like to a screen reader

hand drawn math formula

This is the original image of a handwritten math formula that was dragged into ChatGPT 4o with the query:

“I’m going to give you another image to interpret. Please switch this out to html if you can.”

It gave me the html:

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Quantum Equation</title>
<script type="text/javascript" async
src="https://polyfill.io/v3/polyfill.min.js?features=es6">
</script>
<script type="text/javascript" async
src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js">
</script>
</head>
<body>

<h2>Quantum Mechanics Equation</h2>

<p>
\[
\dot{C}_m(t) = -\frac{i}{\hbar} \sum_n C_n(t) e^{i \omega_{mn} t} \langle \psi_m | H_1 | \psi_n \rangle
\]
</p>

</body>
</html>

and when this is opened in a browser, it looks like this:

screen shot of the html code

and SOUNDS like this in the screen reader:

Here is the transcript produced by NotebookLM from the audio file:

“over C subscript M left parenthesis T right parenthesis equals minus fraction start I over H with stroke end of fraction and area summation with N underneath C subscript N left parenthesis T right parenthesis E superscript I subscript MN T mathematical left angle bracket subscript M vertical line H subscript one vertical line subscript N mathematical right angle bracket”

LaTeX and Accessibility

This is an example of what LaTeX can look like:

a sample of latex

When I drop it (the image, not the LaTeX code) onto ChatGPT 4o, I get the following text:

“This image contains mathematical equations written in LaTeX format, specifically related to quantum mechanics. The equations involve wavefunctions (ψ_m), coefficients (C_n(t)), Hamiltonians (H_1), and time evolution expressions.

Key elements:

  1. The Schrödinger equation in the interaction picture.
  2. Expansion in terms of basis states ψ_n.
  3. The time evolution of coefficients C_n(t).
  4. Matrix elements of the perturbation Hamiltonian H_1.
  5. The final boxed equation represents the differential equation governing C_n(t), which is common in time-dependent perturbation theory.

Would you like me to extract and format this in LaTeX for better readability?”

and when I answer yes, I get this:

“Here is the LaTeX code for the equations in your image:

\documentclass{article}
\usepackage{amsmath}

\begin{document}

\begin{equation}
\langle \psi_m | \times \left( i\hbar \sum_n \dot{C}_n(t)e^{-iE_n^{(0)}t/\hbar} | \psi_n \rangle = \sum_n H_1 C_n(t)e^{-iE_n^{(0)}t/\hbar} | \psi_n \rangle \right)
\end{equation}

\begin{equation}
i\hbar \sum_n \dot{C}_n(t)e^{-iE_n^{(0)}t/\hbar} \delta_{m,n} = \sum_n C_n(t)e^{-iE_n^{(0)}t/\hbar} \langle \psi_m | H_1 | \psi_n \rangle
\end{equation}

\begin{equation}
i\hbar \dot{C}_m(t) = \sum_n C_n(t)e^{i(E_m^{(0)} - E_n^{(0)})t/\hbar} \langle \psi_m | H_1 | \psi_n \rangle
\end{equation}

\begin{equation}
\boxed{ \dot{C}_m(t) = -\frac{i}{\hbar} \sum_n C_n(t)e^{i\omega_{mn}t} \langle \psi_m | H_1 | \psi_n \rangle }
\end{equation}

\end{document}

Explanation:

  • The equations describe the time evolution of coefficients C_n(t) in quantum mechanics.
  • The boxed equation is a common result in time-dependent perturbation theory.
  • The notation ⟨ψ_m|H_1|ψ_n⟩ represents the matrix elements of the perturbation Hamiltonian.

You can compile this LaTeX code in Overleaf or any LaTeX editor to reproduce the formatted output. Let me know if you need modifications! 😊”

————————————————————————————————————————————————————————–
To go into more detail about producing this type of content in a manner that is most friendly to screen readers:

When it comes to LaTeX accessibility, the most commonly recommended tool is Pandoc, which converts your LaTeX documents into HTML with MathML support so that screen readers can interpret the mathematical content. This is generally considered the best way to make LaTeX documents accessible, especially for complex math equations.

Key points about using Pandoc for LaTeX accessibility:
  • Conversion to HTML:
    Pandoc can convert your LaTeX file directly into HTML, which is generally better supported by screen readers than native PDF output from LaTeX.
  • MathML support:
    When converting to HTML, Pandoc can translate your LaTeX math expressions into MathML, a standard format for representing mathematical notation that is well-read by screen readers.
  • Cross-platform compatibility:
    Pandoc is available across various operating systems, making it widely accessible. (A sample command is shown just after this list.)
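If you want to try the Pandoc route yourself, a minimal command looks something like the line below. I’m assuming your LaTeX file is named notes.tex (swap in your own filename); -s produces a standalone HTML page and --mathml asks Pandoc to emit MathML for the math:

pandoc notes.tex -s --mathml -o notes.html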

Other considerations for LaTeX accessibility:
  • Alt text:
    Always provide descriptive alt text for images and figures within your LaTeX document.
  • Semantic structure:
    Use appropriate LaTeX commands for headings, lists, and other structural elements to improve screen reader navigation.
  • Metadata inclusion:
    Add relevant metadata like document title, author, and keywords to enhance accessibility. (A small LaTeX sketch of these last two points follows this list.)
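To make the semantic structure and metadata points a bit more concrete, here is a minimal sketch of a LaTeX source set up that way. The title, author, and section text are placeholders of my own; hyperref’s pdftitle and pdfauthor keys are one common way to attach document metadata:

\documentclass{article}
\usepackage{amsmath}
\usepackage{hyperref}
\hypersetup{pdftitle={Quantum Notes}, pdfauthor={Your Name}} % document metadata
\title{Quantum Notes}
\author{Your Name}
\begin{document}
\maketitle
\section{Time-dependent perturbation theory} % real headings help screen reader navigation
\begin{itemize} % use list environments rather than hand-typed bullets
  \item Expansion in the basis states $\psi_n$
  \item Time evolution of the coefficients $C_n(t)$
\end{itemize}
\end{document}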

 

Given this information, I went back to ChatGPT and asked if it could render it into html for me.  It gave me this code:

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Quantum Mechanics Equations</title>
<script type="text/javascript" async
src="https://polyfill.io/v3/polyfill.min.js?features=es6">
</script>
<script type="text/javascript" async
src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js">
</script>
</head>
<body>

<h2>Quantum Mechanics Equations</h2>

<p>
\[
\langle \psi_m | \times \left( i\hbar \sum_n \dot{C}_n(t)e^{-iE_n^{(0)}t/\hbar} | \psi_n \rangle = \sum_n H_1 C_n(t)e^{-iE_n^{(0)}t/\hbar} | \psi_n \rangle \right)
\]
</p>

<p>
\[
i\hbar \sum_n \dot{C}_n(t)e^{-iE_n^{(0)}t/\hbar} \delta_{m,n} = \sum_n C_n(t)e^{-iE_n^{(0)}t/\hbar} \langle \psi_m | H_1 | \psi_n \rangle
\]
</p>

<p>
\[
i\hbar \dot{C}_m(t) = \sum_n C_n(t)e^{i(E_m^{(0)} - E_n^{(0)})t/\hbar} \langle \psi_m | H_1 | \psi_n \rangle
\]
</p>

<p>
\[
\boxed{ \dot{C}_m(t) = -\frac{i}{\hbar} \sum_n C_n(t)e^{i\omega_{mn}t} \langle \psi_m | H_1 | \psi_n \rangle }
\]
</p>

</body>
</html>

which, when copied into an index.html file, looks like this:

 

picture of the html render of the latex example.

Which I believe looks just like the original image:

a sample of latex

 

What is the point?  This started with a picture and ended with screen-readable HTML.

 We can do this lift.

one more thing….

I tried to find handwritten math notes online and had a really difficult time.  Most everything that came up was stock images.  So I ended up drawing out the last line of the above formula.

hand-drawn math from the same formula we were using before.

 

Dropping this onto ChatGPT gave me this:

The handwritten equation in the image appears to be:

a screen shot of the result of the interpretation

which also gave me the html code resulting in:

screen shot of the html code

Feel free to send me handwritten notes to play with!

 

 

Brightspace: Advanced Assessments

Previously Titled: New Feature in Brightspace: Graders and Grade Release Permissions

 

Step by Step Advanced Assessments

Create an assignment in Brightspace

Go to the Evaluations & Feedback side panel and expand it to see the options.

visual of the side panels associated with assignments

expanded Evaluation & Feedback visual

These options, depending on what you pick, reveal additional options.

Click on Select Evaluators (you can choose from the people enrolled in your course with a grading permissions role.)

picture of the select evaluators area of brightspace

Choose how evaluations will be processed when there are multiple evaluators.  You can choose between:

  • All evaluators work on the same evaluation together. Publishers release this evaluation to learners.
  • Evaluators work on their own evaluations separately. Publishers aggregate results into one final evaluation and release it to learners.

screenshot of the stated evaluation options

IF you have multiple evaluators: Allocate Evaluators to Learners (select which graders will grade which students)

 

The pull-down menu lets you choose from:

  • Allocate all evaluators to every learner
  • Randomly allocate 1 evaluator to every learner evenly

screenshot of the allocation methods

You can manually finesse these generic selections by using the Evaluator Allocations options area and checking or unchecking students for a given evaluator.

If new students come into the course, Brightspace will follow the automatic allocation method you picked for the new learner enrollments.

Choose publishers:

screenshot of the select publishers area

 

Important: It’s best to configure these settings when creating your assignment. If that’s not possible, make any necessary adjustments before receiving submissions. Once a student has been graded, these settings become locked and cannot be changed. Additionally, if submissions exist but no students have been graded, removing all evaluators will permanently disable advanced assessment features. This means grading will revert to the standard assignment process as it existed before these features were introduced to Brightspace.

 

 

notebooklm icon to indicate chat feature. Chat with my Advanced Assessments NotebookLM about this subject if you have any questions. You must use your SBU gmail to use this feature. (I am learning how to do this and may need to add each person who wants to use this chat. Feel free to reach out with a request if that turns out to be the case.)

 

Deep Dive by NotebookLM: Listen to the Audio here.

 

Sora: A First Shot

If you pay for ChatGPT, you now have access to Sora, their video creation AI application.  This is my first attempt…   It’s a bird, it’s a plane, it’s super goat!

 

Prompt: a herd of mountain goats can be seen charging up the side of a rocky hill. One of the ones in the front is carrying a ran banner. it is very dramatic and the camera zooms in as they get to the top of the hill.
