UNT Digital Libraries and the Portal to Texas History are starting to test the 
waters here too, with a ton of content to catch up on. Early days.

Vendors: We've tested 3Play and then rev.com. At the latest Accessing Higher 
Ground (AHG) conference, the latter was getting talked up a lot by ODA office 
folks as their current preferred vendor, given its quick turnaround for the cost.

Automation: I've played with https://github.com/agermanidis/autosub with 
decent-ish output on a few test cases. I know there are a few Amazon-related 
demos out there too. No formal workflows on my end yet, but I think your 
outlined approach is generally what my preferred option would look like too. 
Hope to hear more from you/others on what they are trying.
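
For what it's worth, if I do formalize an autosub workflow, the batch pass 
would look something like this minimal Python sketch. The directories are made 
up, and the flag spellings are from memory, so check "autosub --help" before 
trusting them:

    # Hypothetical batch captioning pass over a directory of A/V masters.
    # Assumes autosub (https://github.com/agermanidis/autosub) is on PATH.
    import subprocess
    from pathlib import Path

    MEDIA_DIR = Path("/data/av_masters")  # made-up source directory
    OUT_DIR = Path("/data/vtt_drafts")    # made-up output directory
    OUT_DIR.mkdir(parents=True, exist_ok=True)

    for media in sorted(MEDIA_DIR.glob("*.mp4")):
        out = OUT_DIR / (media.stem + ".vtt")
        if out.exists():
            continue  # skip files we've already drafted
        # -S/-D: speech and output language; -F vtt: WebVTT output
        subprocess.run(
            ["autosub", "-S", "en", "-D", "en", "-F", "vtt",
             "-o", str(out), str(media)],
            check=True,
        )
        print("drafted", out)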

Digression: I note a handful of folks I talked to at AHG didn't think OCRing 
text in image content was good enough for real compliance once they saw the 
gibberish it often spits out, which leads me to believe automated efforts 
for A/V would leave us open to the same sorts of complaints. But we do what we 
can, right? Also, captions/transcriptions will only get us halfway to 
WCAG AA, given the need for audio descriptions. Maybe text-to-speech could fill 
that gap? 3Play has a plugin along those lines.
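
To make the idea concrete, what I picture is a separate WebVTT descriptions 
track that a player or screen reader voices via TTS. A minimal sketch, with 
invented file names; native browser support for kind="descriptions" is thin, 
so in practice it would take a player library or a pre-rendered described 
audio mix:

    <video controls>
      <source src="performance.mp4" type="video/mp4">
      <!-- timed captions for speech/lyrics -->
      <track kind="captions" src="performance.vtt" srclang="en" label="English" default>
      <!-- timed visual descriptions for a TTS voice to read aloud -->
      <track kind="descriptions" src="performance.desc.vtt" srclang="en" label="Descriptions">
    </video>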

Other issues on my plate: caption cleanup and audio-description script 
authoring (probably using WGBH's CADET); other outliers like WebVTT chapters; 
what WebVTT files should look like for music where you want to give 
substantial info (e.g., movements in a symphony, describing affect in a French 
aria, or audio-describing a performance with something better than "[jazz 
music playing]"; rough sketches below); and, tangentially to your original 
question: what it looks like to hire/contract an ASL signer to make derivative 
files to meet that need if/when it comes up.
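
Here are the kinds of rough WebVTT sketches I have in mind (timings, movement 
titles, and cue text all invented):

    == chapters.vtt (served via <track kind="chapters">) ==
    WEBVTT

    1
    00:00:00.000 --> 00:09:40.000
    I. Allegro con brio

    2
    00:09:40.000 --> 00:20:15.000
    II. Andante con moto

    == captions.vtt (a music cue that tries to beat "[jazz music playing]") ==
    WEBVTT

    00:01:05.000 --> 00:01:30.000
    [Muted trumpet solo: slow and mournful over brushed snare]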

As to storage, our WebVTT files go into a local GitLab repo, and then we have 
a few local scripts that push them onto the public DL filesystem. I idly dream 
of a future scenario where the DL public interface provides links from the 
automated transcripts to the git repo for some sort of crowdsourced cleanup 
effort. Side note: ODA office folks looked at me with a lot of puzzlement when 
I asked how they were archiving/storing captioned media!
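
In spirit, those push scripts amount to a one-way sync like this Python 
sketch; the paths are invented, and it assumes the public filesystem is 
mounted locally:

    # Hypothetical sync from a GitLab working copy to the public DL filesystem.
    import shutil
    from pathlib import Path

    REPO = Path("/home/dl/webvtt")            # local clone of the GitLab repo
    PUBLIC = Path("/mnt/dl-public/captions")  # made-up mount of the DL filesystem

    for vtt in REPO.rglob("*.vtt"):
        dest = PUBLIC / vtt.relative_to(REPO)
        dest.parent.mkdir(parents=True, exist_ok=True)
        # copy only new or newer files, preserving timestamps
        if not dest.exists() or vtt.stat().st_mtime > dest.stat().st_mtime:
            shutil.copy2(vtt, dest)
            print("pushed", dest)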

For now at least, non-captioned A/V items have links in their descriptive 
records for making requests, which we'll typically honor ASAP with a 
vendor-supplied file. See 
https://texashistory.unt.edu/ark:/67531/metadc700196/ (sidebar for the request 
link). For now this just populates a simple webform with some boilerplate.

I'd be interested to hear more of what you are up to.

Cheers,

William Hicks
 
Digital Libraries: User Interfaces
University of North Texas
1155 Union Circle #305190
Denton, TX 76203-5017
 
email: william.hi...@unt.edu  | phone: 940.891.6703 | web: 
http://www.library.unt.edu
Willis Library, Room 321
 
 



On 2/11/19, 4:02 PM, "Code for Libraries on behalf of Goben, Abigail H" 
<CODE4LIB@LISTS.CLIR.ORG on behalf of ago...@uic.edu> wrote:

    I can't speak to captioning, but I use temi.com for transcription for the 
class that I teach. It's $0.10 a minute for machine transcription. Overall it 
does a really decent job, and I can't argue with the price. The transcription 
takes about half the runtime of the video; I do light editing and post.
    
    -- 
    Abigail H. Goben, MLS
    Associate Professor
    Information Services and Liaison Librarian
    
    Library of the Health Sciences
    University of Illinois at Chicago
    1750 W. Polk (MC 763)
    Chicago, IL 60612
    ago...@uic.edu 
    
    
    -----Original Message-----
    From: Code for Libraries [mailto:CODE4LIB@LISTS.CLIR.ORG] On Behalf Of Kate 
Deibel
    Sent: Monday, February 11, 2019 1:37 PM
    To: CODE4LIB@LISTS.CLIR.ORG
    Subject: Re: [CODE4LIB] A/V and accessibility
    
    I'd love to hear which auto-captioning options you've found to be tolerable.
    
    What I can say is that this is the informal policy I've been promoting for 
accessibility in our special collections. In general, any accommodation 
request in special collections will likely be part of a very nuanced, focused 
research agenda. Thus, any remediation will likely have to be specific not 
only to the individual's disability but also to the nature of their research. 
In the case of A/V, a rough transcription may be enough if the patron is 
focusing more on the visual side of it. For others, though, a more thorough 
transcription may be required.
    
    All in all, your approach sounds wise.
    
    Katherine Deibel | PhD
    Inclusion & Accessibility Librarian
    Syracuse University Libraries 
    T 315.443.7178
    kndei...@syr.edu
    222 Waverly Ave., Syracuse, NY 13244
    Syracuse University
    
    
    -----Original Message-----
    From: Code for Libraries <CODE4LIB@LISTS.CLIR.ORG> On Behalf Of Carol Kassel
    Sent: Monday, February 11, 2019 11:31 AM
    To: CODE4LIB@LISTS.CLIR.ORG
    Subject: [CODE4LIB] A/V and accessibility
    
    Hi,
    
    We're working on a roadmap for making A/V content from Special Collections 
accessible. For those of you who have been through this process, you know that 
one of the big-ticket items is captions and transcripts. In our exploration of 
options, we've found a couple of pretty good auto-captioning solutions. Their 
accuracy is about as good as what you'd get from performing OCR on scanned book 
pages, which libraries do all the time. One possibility is to perform 
auto-captioning on all items and then provide hand-captioning upon request for 
the specific items where a patron needs better captions.
    
    This idea will be better supported if we know what our peer institutions 
are doing... so what are you doing? Thanks to those to whom I've reached out 
personally; your information has helped tremendously. Now I'd like to find out 
from others how they've handled this issue.
    
    Thank you,
    
    Carol
    
    --
    Carol Kassel
    Senior Manager, Digital Library Infrastructure
    NYU Digital Library Technology Services
    c...@nyu.edu
    (212) 992-9246
    dlib.nyu.edu
    
