I recently shared a post on LinkedIn and Facebook listing some lesser-known features built into my screen reader that I find very helpful when I’m working. I thought I’d take a deeper dive into the features I feel increase my productivity and help me get the most out of my technology. Whether you work in the access technology field, teach blind students, or, like me, are a direct user of this tech, I hope this will be informative. So it’s time to start with Picture Smart.
What is JAWS Picture Smart?
Picture Smart is a feature that comes with the JAWS screen reader. Essentially, you can use it to get a short description of any image, whether that’s an image on a website, a photo someone has emailed you, or an image that appears as you’re browsing social media. There are so many uses for it, and I feel the more you use it, the more useful it becomes. One example of how I’ve used it is to get a description of a button on a website that didn’t have a useful label.
I’ve found it particularly useful when working. There are two main ways I use Picture Smart at work. The first is to get a description of my own video feed when I’m getting ready to join a Teams call. I want to be sure of what’s in my background before I join, and now I can check in much the same way a sighted person might glance at their camera preview.
What are the descriptions like?
Here’s an example of what it gave me when I activated it on my own video feed in Teams, starting with the short description it gives you by default.
“The image shows a person with short hair, wearing a dark-coloured polo shirt and a large black headset with a microphone, seated in a room. The background includes a mostly bare wall with a door that has a hanging storage organizer. The lighting appears natural, coming from the front left of the person.”
You also have the option to ask specific questions about the image or to request a more detailed description. Here’s what I got when I requested the detailed description.
“The image shows a person sitting in what appears to be a home office or bedroom setting. They are wearing a dark coloured shirt or sweater and have large over-ear headphones on. The person has short, dark hair. They appear to be looking directly at the camera, suggesting this might be a screenshot from a video call or webcam.
In the background, you can see part of a white wall and what looks like a door frame on the left side. On the right side of the image, there’s a glimpse of what might be a hanging organizer or storage unit on the wall.
The lighting in the room seems to be natural, coming from a source not visible in the frame, giving the image a soft, even illumination.
Overall, the image gives the impression of someone set up for remote work or possibly online gaming, given the headphones and the casual home setting.”
So as you can see, this description was very detailed and really helped me get a sense of what others might see when I joined the call. If you’re using this feature or telling someone about it, you do need to be aware that it uses AI to generate these descriptions. Some understanding of the limitations of large language models would be helpful.
Other uses for Picture Smart
The second way I use this feature is to read slides and other pieces of information that colleagues share when they screen share in a call. It’s important to understand that a screen reader can’t interact with the content on a shared screen, as you’re essentially being fed an image of someone else’s screen. However, by using Picture Smart I’ve been able to reliably read the content that people are sharing with me, which has helped me follow what is being discussed when I’m in a meeting.
Please don’t assume blind colleagues can or should do this in place of being sent the slides. Having a copy of the content that you can fully access is always a better solution, but this is a strategy for accessing information in situations where you wouldn’t have any access to it otherwise.
I’ve also used Picture Smart to identify photos on my computer that I’d saved with unhelpful file names. This enabled me to rename them to something I would easily identify in future. It’s also a great way for me to get quick descriptions of images that I might want to investigate further when I’m doing research for my degree. I may want to ask an actual person to describe them to me at a later date, but at least I can filter which images I think I’m likely to find most useful.
So whether I’m working or studying, or even browsing the Internet for fun, it’s such a great feature.
How do you enable Picture Smart?
Picture Smart requires you to use a series of keystrokes. First, press the JAWS key (Insert) together with the space bar. At that point, you should hear a clicking noise. Then press P for Picture Smart, and depending on what you want to do, you can choose from a series of keys. Here are some that I find most helpful:
- Enter: Describes the image based on the current application and context.
- F: Describes the current image you are on in File Explorer.
- C: Describes the current control.
You can also add Alt to any of these commands if you want to ask a question first. For example, you could press Insert+Space, then P, then Alt+Enter, and then ask a question before it describes the image in context. For more information about Picture Smart, or to read the full list of keystrokes, visit the What’s New in JAWS webpage and select JAWS 2024 from the list.