When Accessibility Issues in Google Apps Require External Solutions

  1. Complex Navigation or Interaction Challenges:
    • Google apps generally follow WCAG standards, but certain complex workflows (e.g., navigating large spreadsheets or documents with multiple interactive elements) may not fully support screen readers or keyboard-only navigation.
    • Example: Managing extensive pivot tables in Google Sheets may not be fully screen-reader-friendly, requiring a switch to Microsoft Excel or a tool like JAWS for better accessibility.
  2. Non-Editable Content Embedded in Google Docs or Slides:
    • If you embed non-accessible elements such as images without alt text, charts, or external media with no captions, these cannot always be fully fixed inside Google apps.
    • Solution: Use external software like Adobe Acrobat Pro to enhance accessibility in exported PDFs or a dedicated media editor to caption videos.
  3. Lack of Advanced Semantic Structuring:
    • Google Docs supports headings, but for advanced semantic markup (e.g., ARIA roles or custom landmarks), its capabilities are limited. If you need to ensure accessibility for complex documents, moving to Microsoft Word or HTML editors with ARIA support is necessary.
  4. Inadequate Support for Custom Accessibility Features:
    • Custom interactive features (e.g., custom menus or JavaScript-like interactions in Google Slides) may lack robust ARIA tagging or keyboard navigation.
    • Solution: Transition to software like PowerPoint or InDesign, which allows detailed accessibility tagging.
  5. Color Contrast and Design Issues:
    • Google Sheets and Slides allow some color customization, but there are no built-in tools to check and enforce WCAG-compliant contrast ratios.
    • Solution: Use external contrast-checking tools or design software with integrated accessibility checks (e.g., Figma or Adobe XD).
  6. Non-Compliant Form Design in Google Forms:
    • Google Forms often falls short in supporting accessible error identification, labels, or custom ARIA roles. Advanced forms with full WCAG compliance may require a platform like Microsoft Forms or a web development environment.
  7. No Detailed Accessibility Auditing:
    • Google Workspace apps do not provide native tools for auditing accessibility across documents or designs.
    • Solution: Use external auditing tools such as WAVE or Axe to identify and resolve WCAG violations.
  8. Language and Multilingual Accessibility:
    • Google Docs supports some basic language tagging but cannot fully address accessibility for multilingual documents with specific pronunciation or semantic needs.
    • Solution: Use advanced editing tools that allow precise language tagging, like Microsoft Word or InDesign (see the markup sketch after this list).
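
As a concrete illustration of that last point: once a document is exported to HTML (or rebuilt in an HTML editor), per-passage language can be tagged explicitly, which is what screen readers need in order to switch pronunciation. A minimal TypeScript sketch, assuming the exported page marks French passages with a hypothetical .lang-fr class:

    // Minimal sketch: set the document language plus per-passage overrides
    // (WCAG 3.1.2, Language of Parts). The ".lang-fr" class is a hypothetical
    // marker added during export; Google Docs itself cannot encode this.
    document.documentElement.setAttribute("lang", "en"); // document-wide default
    document.querySelectorAll<HTMLElement>(".lang-fr").forEach((el) => {
      el.setAttribute("lang", "fr"); // per-passage override for screen readers
    });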

How to Decide When to Move to a Different Program

You should consider moving away from Google apps when:

  • The task involves a critical WCAG violation that Google Workspace cannot address (e.g., no alt-text for key visuals or inaccessible interactive content).
  • External auditing tools consistently flag issues that cannot be resolved using Google’s built-in accessibility options.
  • You need advanced customization or integration of assistive technologies (e.g., full ARIA implementation, detailed tagging, or robust form accessibility).

 

Example workflows:

1. Document Accessibility in Microsoft Word

  • Why Word? Microsoft Word has advanced accessibility tools, including a built-in Accessibility Checker and robust support for semantic structure (e.g., heading levels, alt text for images, table summaries).
  • Example Workflow:
    1. Use Word’s Styles feature to apply semantic headings (Heading 1, Heading 2, etc.).
    2. Add alt text to images by right-clicking the image > Edit Alt Text.
    3. Use the Accessibility Checker (Review tab) to identify and fix issues such as missing headings or poor color contrast.
    4. Export to a tagged, accessible PDF via Save As > PDF, making sure the option to include document structure tags for accessibility is enabled.

2. Presentation Accessibility in Microsoft PowerPoint

  • Why PowerPoint? PowerPoint includes tools for screen-reader-friendly slide layouts, proper reading order, and video captioning.
  • Example Workflow:
    1. Use built-in slide layouts (e.g., Title Slide, Content Slide) to ensure proper semantic structure for screen readers.
    2. Add alt text to all images, charts, and graphs.
    3. Use the Accessibility Checker (Review tab) to catch issues like insufficient text contrast.
    4. Add closed captions or subtitles to embedded videos via the Insert Captions feature.

3. Data Accessibility in Microsoft Excel

  • Why Excel? Excel is more screen-reader-friendly than Google Sheets and offers features like named ranges and accessible pivot tables.
  • Example Workflow:
    1. Use Named Ranges instead of referencing raw cell coordinates (e.g., =SUM(SalesData)).
    2. Add descriptive headers to all columns and rows for better navigation.
    3. Use the Accessibility Checker to identify common issues like merged cells, which can confuse assistive technologies.
    4. Export to a PDF with accessibility tags to retain structure.

4. Accessible PDFs with Adobe Acrobat Pro

  • Why Acrobat Pro? It’s the industry standard for ensuring that PDFs are fully accessible, with tools for tagging, alt text, and reading order adjustments.
  • Example Workflow:
    1. Import your document from Word or PowerPoint and open it in Acrobat.
    2. Use the Accessibility Tool to run a full check of your document.
    3. Fix issues such as missing tags, reading order problems, and alt text directly in Acrobat.
    4. Verify compliance with the built-in Accessibility Checker.

5. Form Accessibility in Microsoft Forms or JotForm

  • Why These Tools? These platforms provide greater control over form labels, field instructions, and error messages than Google Forms.
  • Example Workflow:
    1. Add clear and descriptive labels to every form field (a minimal markup sketch follows this workflow).
    2. Include persistent instructions (not just placeholder text) for input requirements (e.g., “Enter your 10-digit phone number”).
    3. Enable validation for required fields with error messages.
    4. Test the form with screen readers to confirm navigation and readability.
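
If the chosen platform still does not expose enough control, the same pattern can be wired up directly in markup or script. A minimal TypeScript sketch of steps 1–3, with hypothetical ids and messages:

    // Minimal sketch: associate a label, persistent instructions, and an error
    // message with a phone field so screen readers announce all three.
    const phone = document.createElement("input");
    phone.id = "phone";
    phone.type = "tel";
    phone.required = true;
    phone.setAttribute("aria-describedby", "phone-help phone-error");

    const label = document.createElement("label");
    label.htmlFor = "phone";
    label.textContent = "Phone number";

    const help = document.createElement("p");
    help.id = "phone-help";
    help.textContent = "Enter your 10-digit phone number.";

    const error = document.createElement("p");
    error.id = "phone-error";
    error.setAttribute("role", "alert"); // announced when text is added

    phone.addEventListener("blur", () => {
      const valid = /^\d{10}$/.test(phone.value);
      phone.setAttribute("aria-invalid", String(!valid));
      error.textContent = valid ? "" : "Please enter exactly 10 digits.";
    });

    document.body.append(label, phone, help, error);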

6. Design Accessibility in Figma or Adobe XD

  • Why Figma/Adobe XD? These tools allow detailed accessibility checks during the design process, including color contrast and interactive elements.
  • Example Workflow:
    1. Use plugins like Contrast or Stark to ensure your color palette meets WCAG contrast requirements (a contrast-ratio sketch follows this workflow).
    2. Annotate designs with accessibility notes (e.g., “This button must have ARIA role='button'”).
    3. Export designs with accessible descriptions for developers to implement.
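
If you want to sanity-check a specific color pair outside of a plugin, the WCAG 2.x contrast ratio is easy to compute yourself. A minimal TypeScript sketch of the relative-luminance formula and the AA thresholds (4.5:1 for normal text, 3:1 for large text):

    // Minimal sketch of the WCAG 2.x contrast-ratio calculation.
    // Colors are [r, g, b] with channels in the 0-255 range.
    type RGB = [number, number, number];

    function relativeLuminance([r, g, b]: RGB): number {
      const lin = [r, g, b].map((c) => {
        const s = c / 255;
        return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
      });
      return 0.2126 * lin[0] + 0.7152 * lin[1] + 0.0722 * lin[2];
    }

    function contrastRatio(fg: RGB, bg: RGB): number {
      const [lighter, darker] = [relativeLuminance(fg), relativeLuminance(bg)].sort((a, b) => b - a);
      return (lighter + 0.05) / (darker + 0.05);
    }

    // Dark gray (#333333) on white is roughly 12.6:1, comfortably passing AA.
    const ratio = contrastRatio([51, 51, 51], [255, 255, 255]);
    console.log(ratio.toFixed(2), ratio >= 4.5 ? "passes AA (normal text)" : "fails AA");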

7. Website Accessibility with WordPress

  • Why WordPress? WordPress supports accessibility-ready themes and plugins to ensure WCAG compliance for websites.
  • Example Workflow:
    1. Choose an accessibility-ready theme (marked in the theme directory).
    2. Use plugins like WP Accessibility to add ARIA roles, skip links, and other enhancements.
    3. Test the site with tools like WAVE or Axe to identify accessibility barriers (a programmatic axe sketch follows).
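
Axe can also be run programmatically while you develop, not just through the browser extension. A minimal sketch using the axe-core npm package (the import style assumes esModuleInterop):

    import axe from "axe-core";

    // Minimal sketch: audit the rendered page and log any WCAG violations found.
    async function auditPage(): Promise<void> {
      const results = await axe.run(document);
      for (const violation of results.violations) {
        console.log(`${violation.id} (${violation.impact ?? "n/a"}): ${violation.help}`);
        violation.nodes.forEach((node) => console.log("  at", node.target.join(" ")));
      }
    }

    auditPage();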

8. Interactive Accessibility in ARIA-Supported Development Environments

  • Why Use ARIA? For custom interactive content, ARIA (Accessible Rich Internet Applications) helps ensure assistive technologies can interact with dynamic elements.
  • Example Workflow:
    1. Develop interactive content in HTML using ARIA roles (e.g., role="dialog"); a minimal sketch follows this workflow.
    2. Use tools like Deque Axe or Google Lighthouse to test ARIA implementation.
    3. Include keyboard navigation testing to ensure all users can interact with your content.
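
To make the dialog example concrete, here is a minimal TypeScript sketch of an accessible dialog with keyboard support; the ids and text are illustrative, and a production dialog would also need focus trapping and focus restoration:

    // Minimal sketch: a dialog with role="dialog", an accessible name,
    // and Escape-to-close keyboard support. Element ids are hypothetical.
    function openDialog(): void {
      const dialog = document.createElement("div");
      dialog.setAttribute("role", "dialog");
      dialog.setAttribute("aria-modal", "true");
      dialog.setAttribute("aria-labelledby", "dialog-title");
      dialog.tabIndex = -1;

      const title = document.createElement("h2");
      title.id = "dialog-title";
      title.textContent = "Confirm your submission";

      const close = document.createElement("button");
      close.textContent = "Close";
      close.addEventListener("click", () => dialog.remove());

      dialog.addEventListener("keydown", (event) => {
        if (event.key === "Escape") dialog.remove(); // keyboard users can dismiss it
      });

      dialog.append(title, close);
      document.body.append(dialog);
      dialog.focus(); // move focus into the dialog when it opens
    }

    openDialog();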

 

Are you using Federal money in your work at SBU?

close-up image of a hand signing a form

 

You may have personally signed to assure that these standards are being met. The text in federal grants typically references compliance with federal accessibility laws, such as the **Rehabilitation Act of 1973 (Sections 504 and 508)** and the **Americans with Disabilities Act (ADA)**. When an individual or institution applies for or accepts federal grant money, they agree to meet certain accessibility standards. While the exact wording may vary across different grants and agencies, the agreement usually includes language to the following effect:

### Example General Language in Federal Grant Agreements:

1. **Compliance with Federal Laws:**
“The recipient agrees to comply with all applicable federal laws, including the Rehabilitation Act of 1973, as amended (29 U.S.C. §§ 794 and 794d), which prohibit discrimination on the basis of disability in programs and activities receiving federal financial assistance and require the provision of accessible electronic and information technology.”

2. **Affirmation of Accessibility Efforts:**
“The recipient certifies that all programs, services, activities, and facilities funded under this grant will be accessible to and usable by individuals with disabilities in accordance with applicable laws, regulations, and guidelines.”

3. **Continuous Accessibility Improvement:**
“The recipient affirms its ongoing commitment to advancing accessibility in its programs and services and to the development and implementation of strategies to achieve compliance with accessibility standards, including but not limited to WCAG 2.1 AA, Section 508, and ADA Title II and Title III.”

4. **Reporting and Accountability:**
“The recipient agrees to monitor compliance with accessibility requirements and to submit periodic reports as requested by the funding agency to demonstrate ongoing efforts and progress toward full accessibility.”

### Key Takeaways:
– **Federal Compliance:** The university must demonstrate compliance with Section 504 (general nondiscrimination) and Section 508 (electronic accessibility for federal funds recipients).
– **Commitment to Accessibility:** Grant recipients affirm that they are actively working toward full accessibility in all operations impacted by federal funding.
– **Accountability:** Granting agencies may require documentation or assurances demonstrating these efforts.

 

And of course, even if you yourself have not signed such an agreement, the university itself takes federal money to operate. At least for now.

Getting PDFs ready for Accessibility Requirements

AI created attention grabber - decorative

The compliance of scanned and OCRed files from Adobe Acrobat Pro with accessibility standards depends on several factors, especially when dealing with complex layouts like columns and tables. Here’s how these elements fare:

1. Text Recognition (OCR) Accuracy

  • Adobe Acrobat Pro’s OCR is generally reliable for converting scanned images into editable and searchable text.
  • Challenges with Columns: OCR might misinterpret multi-column layouts, reading them linearly rather than by column.
  • Challenges with Tables: OCR may struggle to preserve the structure of tables, often interpreting them as unstructured text.

2. Tagging and Accessibility

Acrobat Pro can automatically tag OCRed documents, but the tags may not always be accurate, especially for complex layouts:

  • Columns: Acrobat might not detect column order correctly, causing screen readers to read content in the wrong sequence.
  • Tables: The software often fails to generate proper table tags, leading to a loss of row and column relationships crucial for screen reader users.

3. Alt Text for Images

  • Scanned documents often include graphical elements, which Acrobat cannot automatically assign alt text to. You must manually add descriptive alt text for meaningful images.

4. Reading Order

  • Acrobat’s “Reading Order” tool is essential to correct the logical reading sequence, especially in multi-column and table-heavy documents.
  • Default reading order for OCRed files may require significant manual adjustments to ensure compliance.

5. Compliance with Accessibility Standards

To meet accessibility standards like WCAG 2.1 or Section 508, additional steps are often necessary:

  • Manually Adjust Tags: Verify and edit tags to accurately reflect document structure, including headings, lists, tables, and columns.
  • Use Acrobat’s Accessibility Checker: This tool helps identify and fix accessibility issues but may not catch all problems in complex layouts.
  • Supplement with Manual Efforts: Complex documents may require manual remediation with tools like Adobe Acrobat or third-party software specialized in accessibility.

Best Practices for Improving Compliance

  1. Pre-OCR Processing: Clean up scanned files to enhance OCR accuracy (e.g., ensuring straight scans, good contrast, and minimal noise).
  2. Use Proper OCR Settings: Select the correct language and enable the “Recognize as Table” option where applicable.
  3. Manually Review Tags: After OCR, manually inspect and adjust tags for accurate representation of document structure.
  4. Simplify Layouts: If possible, avoid overly complex layouts in scanned documents to minimize accessibility challenges.

By taking these additional steps, you can significantly improve the compliance of scanned and OCRed documents, even with complex layouts.

 

Transfigurations on Musings

Cosmic whale for cosmic writing

Hello, my friends. If you, like me, have ever gazed into the cosmos of thought and marveled at the boundless intersections of science, technology, and human understanding, then you’re in for a journey. Today, we embark on a thoughtful exploration inspired by the writings of Jennifer L. Adams—a thinker deeply entrenched in the realm of higher education, where technology and learning converge like celestial bodies in orbit. Her central question is both provocative and profound: Is artificial intelligence truly that different from how our own minds work?

Such a question beckons us to consider the intricate dance of memory, intelligence, and pattern recognition, and to marvel at their manifestations both natural and artificial. Adams begins her inquiry not in a laboratory or lecture hall, but in a bathtub—a setting both humble and evocative, echoing Archimedes himself. She watches whirlpools form and dissipate, contemplating the microscopic life swirling in these temporary eddies. Her curiosity takes her to a surprising discovery: slime mold.

Ah, slime mold—a single-celled organism seemingly so simple, yet capable of navigating mazes and anticipating environmental changes. Imagine: a creature with no brain, no nervous system, no neurons, yet it remembers. Its memory, Adams suggests, may be chemical, a fundamental organization of matter with purpose. It is here, in this primal intelligence, that we are invited to see echoes of artificial intelligence.

Adams draws a parallel to large language models (LLMs), like GPT-4. These models, too, operate without consciousness, yet they predict patterns and generate responses so human-like that they often blur the line between machine and mind. Consider this: when tasked with responding to a zoo worker’s query, the AI adapts, contextualizes, and personalizes its response. It mirrors the dynamic complexity of thought, much as the slime mold mirrors memory.

But Adams doesn’t stop at algorithms. She speculates on the broader implications of intelligence—animal, artificial, and human. She recounts the intricate songs of whales, passed down through generations, a kind of aquatic epic encoded in soundwaves. Could their communication represent an organic language model, evolved naturally and independently of human cognition? What might these songs tell us about their history, their emotions, their view of the universe?

This thought invites an even deeper question: if intelligence emerges in myriad forms—from the chemical traces of slime molds to the silicon networks of AI—what truly defines intelligence? Is it memory? Pattern recognition? Adaptability? Or something ineffable, like the capacity for wonder or the ability to ask questions about existence itself?

Adams provocatively ties this inquiry back to the classroom, to the very essence of learning. Imagine a world where AI personalizes education for every learner, a virtual tutor attuned to the unique pathways of each student’s mind. Yet here, she invokes a cautionary principle: the Prime Directive from Star Trek, a reminder that with great power comes great responsibility. How do we harness AI to amplify human potential without losing what makes learning an inherently human endeavor?

The bathtub becomes a metaphor for our role in this vast experiment. As Adams muses about pulling the plug, ending the microcosmic swirl of life, we are reminded of the fragility of discovery, the delicacy of choice. How we engage with AI, how we integrate it into education, and how we define its role in our society will shape not only our future but our very understanding of intelligence itself.

So, as we stare into the starry vastness of possibility, let us ponder: What if AI is not merely a tool, but a mirror? A mirror reflecting our own creativity, our capacity for connection, our endless curiosity? And in that reflection, perhaps we might better understand ourselves—not as isolated beings, but as part of a vast and intricate cosmos, forever learning, forever exploring.

Stay curious, my friends. The universe awaits.

Musings from the Bath

picture of a whale from the article about whales mentioned towards the bottom of the post.

There is a bit of fiction within the Star Trek universe that really made an impact on me. It is tied into the Prime Directive:

The Prime Directive in Star Trek, also known as General Order 1, is a fundamental principle of the United Federation of Planets. It prohibits Starfleet personnel from interfering with the natural development of alien civilizations. Specifically, it forbids:

  • Interference with Pre-Warp Civilizations: Starfleet cannot reveal its presence or provide advanced technology to civilizations that have not yet developed faster-than-light (warp) travel. This is to prevent cultural contamination and ensure that societies evolve naturally.
  • Interference in Internal Affairs: Starfleet is not allowed to intervene in the internal political, social, or economic matters of any society, even if the civilization is warp-capable.

This is a directive that is often dismissed in the heat of an episode (particularly by Captain Kirk)… but in one movie, Star Trek II: The Wrath of Khan, it shows up as the crew of a science vessel is having a damn hard time finding a planet that has absolutely zero life on it. This concept, when we live in a world where people wonder whether life exists anywhere else at all in the universe, was extremely captivating to me.

So I sat in the bath this morning, and watched as tiny and fleeting whirling vortexes formed when I lowered my arms quickly into the water. I wondered if there was microscopic life in the water that had been doing its thing in this newly created body of water – maybe organizing, reproducing, eating – that had just been swirled into a chaotic storm. I wondered if this microscopic stuff might have thought something like “what the?” as everything was displaced around it. Which made me think of language.

I asked Alexa (yes, I have a device in my bathroom – one with a nice speaker, of course): What is the most primitive life form that we know has a memory? I asked this because I thought that proving one has memory proves some organization of thought – at least to me. I’ll accept arguments, of course.

So the answer is slime mold, which to be honest is a much more primitive life form than I was expecting the answer to be. Turns out that these single-celled organisms can remember their way through mazes, and anticipate changes in temperature and light if they happen in a pattern. They do not have brains or even a nervous system. Other tiny primitive organisms that show memory include bacteria, ciliates, and other more complicated – yet distinctly un-animal-like – life such as plants and fungi.

So, AIs use what is called a Large Language Model (LLM) to appear intelligent. These models break down vast amounts of training data to work out which tiny bit of language is most likely to come next. For example, if I write:

Hello, how are

Just as your phone uses predictive text so you don’t have to type as many letters – you can see the most likely next word could be

you?

If you are told that the speaker is a 35-year-old who works at the zoo taking care of parrots and is speaking to their coworker as they change shifts (changing the query), you might still guess

you?

or you might think

the parrots?

Of course, GPT-4o is a blabbermouth by default, so when I ask it to finish the sentence

Hello, how are

It says

you doing today? I hope you’re having a great day! 😊

When I give it the query I mentioned above, with the speaker being a parrot caregiver at the zoo, it responds:

you? I hope your shift went smoothly—any updates or things I should know about the parrots today?

Demonstrating its politeness and verbosity.
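
For readers who want to reproduce the experiment, here is a minimal TypeScript sketch using the OpenAI SDK; it assumes an OPENAI_API_KEY environment variable, and the prompts and model name are only illustrative:

    import OpenAI from "openai";

    // Minimal sketch of the "finish the sentence" experiment from this post.
    const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

    async function finishSentence(context: string): Promise<string | null> {
      const response = await client.chat.completions.create({
        model: "gpt-4o",
        messages: [
          { role: "system", content: context }, // the zoo-worker framing goes here
          { role: "user", content: "Finish this sentence: Hello, how are" },
        ],
      });
      return response.choices[0].message.content;
    }

    finishSentence("No extra context.").then(console.log);
    finishSentence(
      "The speaker is a 35-year-old zoo worker who cares for parrots, talking to a coworker at shift change."
    ).then(console.log);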

Anyway, back to my bath, I am scrubbing up, thinking about whether I think, therefore, I am – only because I have a runaway LLM running inside of my head. Blah blah blahing at me – and I have a chemical system that remembers things.  I remember myself blabbing.  I remember conclusions I have come to.  I remember patterns.  I remember grudges and fondnesses. (sometimes I can’t quite remember why I like or dislike someone  – but I know I do.  Similarly, I might not remember what happens in a book, but I can tell you if I really liked it. This will come in handy for retirement rereads.)

I remember me.

“If You Think You Can Hold a Grudge, Consider the Crow”  New York Times Article. 

[side note – did you know you can read the NYTimes as a person with an .edu email address?]

Do animals who use sounds to communicate use a small or midsized language model? A chemical one, to be sure. Just as we humans may be using a chemical LLM?

Did you hear about the whales yet?

I’m going somewhere with all of this.

I promise.  But I won’t be finishing these thoughts in today’s blog.

Maybe my LLM can interact with your LLM and we can make some new ideas together?

Oh – and to tie things up a bit. If I had never drained my bath, would something eventually have crawled out of it, wondered about the planet, and started a new civilization? We will never know, because, similar to what Captain Kirk may have done, I pulled the drain plug when I was finished.

Oh – and definitely check out the NotebookLM podcast generated based on this post.

 

“It’s like AI is making us look in the mirror” 

 

More reason to think about how we use the internet

I was asking GPT about a trip to the Grand Canyon as part of a demonstration today, and I didn’t read what it actually responded at the time. I was just closing open tabs in my browser, and I read a bit of the response before closing the tab. After the usual travel tips – use flexible travel dates, consider alternative airports, take budget airlines – the last one really got my attention:

“Clear your cookies: Some airlines and flight comparison websites use cookies to track your browsing history and raise prices if they see that you’re interested in a particular route. Clearing your cookies can help you avoid this price hike.”

I knew that my interest in a particular destination was noted – my browser becomes crowded with ads about the place I was investigating – but it never occurred to me that they would start offering me higher prices just because they know I want to go there.

That is like knowing that a car approaching your gas station is running on empty, and raising the price just before it coasts into the station.

decorative image of a plane on a runway with dollar signs on the ground.

Quick Git instructions (but I recommend using GitHub Desktop – because I like GUIs)

title image – mostly decorative

Mini Lecture: Using Git and GitHub in the Terminal for Windows, Linux, and macOS

1. Initial Setup

  • Install Git: Ensure Git is installed.
    • Windows: Download from git-scm.com and follow the installer.
    • Linux: Run sudo apt install git (Debian/Ubuntu) or sudo dnf install git (Fedora).
    • macOS: Use brew install git if Homebrew is installed or download directly from git-scm.com.
  • Configure Git: Set your global username and email:
    git config --global user.name "Your Name"
    git config --global user.email "your.email@example.com"
  • Set Up Authentication with SSH:
    • Generate an SSH Key:
      ssh-keygen -t rsa -b 4096 -C "your.email@example.com"

      Follow the prompts and press Enter to accept the default file location.

    • Add the SSH Key to the SSH Agent:
      eval $(ssh-agent -s)
      ssh-add ~/.ssh/id_rsa
    • Add Your SSH Key to GitHub:
      1. Copy the public key to your clipboard:
        cat ~/.ssh/id_rsa.pub | pbcopy  # Use 'xclip' or manual copy on Linux
      2. Go to GitHub SSH settings and add a new SSH key.
    • Test the Connection:
      ssh -T git@github.com

2. Cloning a Repository

To work on an existing project, clone the repository:

# Replace with your repository URL
git clone https://github.com/username/repository.git

Or, if using SSH:

git clone git@github.com:username/repository.git

Navigate into the project directory:

cd repository

3. Basic Git Workflow

  • Check the status of your repository:
    git status
  • Stage changes (add files to the staging area):
    git add .  # Stages all modified files
    git add filename.ext  # Stages a specific file
  • Commit changes with a message:
    git commit -m "Your commit message"
  • Push changes to GitHub:
    git push origin main  # Replace 'main' with the appropriate branch name

4. Creating a New Repository

  • Initialize a new repository:
    git init
  • Connect to a remote GitHub repository:
    git remote add origin https://github.com/username/new-repository.git

    Or, if using SSH:

    git remote add origin git@github.com:username/new-repository.git
  • Add and commit your initial project files:
    git add .
    git commit -m "Initial commit"
  • Push to GitHub:
    git branch -M main  # Ensure the branch is named 'main'
    git push -u origin main

5. Branching and Merging

  • Create a new branch:
    git checkout -b new-branch
  • Switch between branches:
    git checkout main
  • Merge a branch into your current branch:
    git merge new-branch

6. Handling Merge Conflicts

If you encounter merge conflicts:

  • Git will mark conflicts in your files (an example of the markers follows this section). Open the files in a text editor and manually resolve the conflicts.
  • Once resolved, stage the changes:
    git add resolved-file.ext
  • Commit the merge:
    git commit -m "Resolved merge conflict"

7. Pulling Updates

To keep your local repository up-to-date with the remote:

git pull origin main  # Replace 'main' with your branch name

8. Additional Useful Commands

  • View commit history:
    git log
  • Undo the last commit (without losing changes):
    git reset --soft HEAD~1
  • Remove files from the staging area:
    git reset filename.ext

Summary

This mini lecture has outlined essential commands for using Git and GitHub across Windows, Linux, and macOS. Remember to commit often, write meaningful commit messages, and stay consistent with Git best practices.

 

concluding slide image – mostly decorative, restating the text above

Researcher? Apply to utilize Empire AI

Dear Stony Brook Faculty and Researchers:

Earlier this spring, New York State and six academic partners, including the State University of New York (SUNY), committed to creating Empire AI*, a GPU supercluster for AI and high-performance computing academic research. Stony Brook faculty member Dr. Robert Harrison is serving as the interim director of Empire AI and can be a useful resource to faculty interested in exploring this resource. An early version of Empire AI’s research computing instrument, called “Alpha,” is coming online by early November 2024. The hardware specifications for Alpha are outlined at the end of this email message. While the ultimate Empire AI instrument will far outstrip Alpha’s specifications, our research communities can now start to use this shared resource. Alpha should enable Stony Brook to do research we otherwise could not with existing resources.

Empire AI Consortium, Inc. welcomes SUNY  faculty and researchers to request time to run a research project using Alpha.
  • HOW: Please submit brief work order requests (WORs) to Empire AI via this secure online form. This short form asks for a paragraph description of your research project and five questions about your resource needs.
  • WHEN: You may submit your work order request starting October 3 through October 31, 2024. When you submit your WOR within this time window has no bearing on when or whether the project is run on Alpha.
  • WHAT: Research compute jobs can be of any CPU/GPU-scale and duration, given the hardware specifications, guidelines, and context below.
  • WHY EARLY ADOPTERS: Alpha will be in start-up mode during its initial months of operation, meaning early users will be expected to help work out configuration issues, and ensure the necessary software is installed.
Here are some further guidelines and context:
  • Empire AI will allocate resources for this initial phase of Alpha following these operating principles: To achieve equity of usage across the six academic systems sharing the instrument, and to maximize utilization of Alpha’s capacity.
  • Priority will be given to projects that best harness the capabilities of Alpha.
  • Depending on the total number of work orders submitted by any one academic institution and the distribution of work orders submitted among institutions, Empire AI may ask a university to prioritize its corresponding work order requests, following whatever procedure that academic institution chooses.
  • When considering appropriate research data for use of Alpha, note that Alpha is neither HIPAA nor NIST-800-171 compliant.
  • Empire AI expects to make Alpha available in this fashion for about one year (i.e., through approximately November 2025), subject to change with notice to the user community.
Consider this call for Empire AI WORs as just the first one. There may be subsequent calls as we gain more experience in using Alpha after this start-up phase. The process for soliciting work orders may change as well.

Finally, it is through the generosity of the Simons Foundation and its Flatiron Institute that Empire AI can so rapidly provide access to an initial instrument for our shared research use.  Through these means, there are no institutional or user fees for running compute jobs on Alpha.  We are piloting Alpha to understand user demand, usage patterns, classes of users, and types of jobs and workloads. Empire AI and Stony Brook University will use our collective experience with Alpha to plan user fees and resource allocation for future builds of Empire AI.

If you have any questions, please contact Robert Harrison at Robert.Harrison@stonybrook.edu.

Please consider submitting a work order to take advantage of this opportunity. I sincerely hope that Stony Brook alone overwhelms the Alpha system, but in any case it will help us understand the need for this and additional GPU resources.

Kevin Gardner
Vice President for Research
Stony Brook University
kevin.gardner@stonybrook.edu
O: 631.632.7932 | M: 603.767.4654 |


Empire AI: Alpha Hardware Specifications

12 HGX Nodes

  • 8 H100 80GB GPUs
  • 10 400Gb/s ConnectX-7 NIC Cards (8 for IB and 2 for Ethernet)
  • 30TB NVMe caching space
  • 2TB of system memory
Non-blocking NDR fabric cabled for rail configuration
  • 8 network switches, 96 optical connections
4 service nodes
  • 2 login nodes and 2 cluster management nodes (NVIDIA Base Command with licenses for all gear)
2PB of DDN Storage
  • 4 x 720TB Flash storage (home directories, training data, snapshots)
*Empire AI Consortium, Inc.’s purpose as a non-profit corporation is to develop and oversee a shared high performance computing facility that will promote responsible research and development, including the advancement of the ethical and public interest uses of artificial intelligence technologies in New York.

Emerging Tech for a Changing Edu

Skip to toolbar