Show HN: Ever wanted to look at yourself in Braille? (github.com/nishantjoshi00)
26 points by cat-whisperer 16 hours ago | 13 comments




Are there blind users of Hacker News here who could answer a probably stupid question:

Would you be able to "perceive" a picture if that picture was engraved on a surface?


I've been blind since birth. When it comes to 2d things such as linear and quadratic graphs, and shapes such as triangles, circles, squares, etc., I had no issues when the material was provided using braille graphics. I can't comprehend representing a 3d object in two dimensions. When I was in college I switched from Computer Science to Telecommunications the second time I failed calc II. I just couldn't comprehend rotating a shape around the axis of a graph to get a 3d shape. This may be something solvable by 3d printing, but that was not easily available when I was in college.

Thanks a lot for your answer!

Do you find the concept of perspective to be totally obscure?


Not blind, and can't speak to how popular or useful they are, but there are products meant to be used like that [0]. I can't find the link but I've also seen this done with paintings, where someone creates essentially a sculpture based on a painting, and then they can 3D print it so a blind person could "see" something like the Mona Lisa or Starry Night.

A while ago I read a biography of Louis Braille, and he created his system to replace an older one where they would teach people to feel the shape of letters in wooden blocks. Braille replaced it because it was much easier to read fast, but it was never meant to be used for something like a picture.

I'd also be interested in whether something like a tactile floor plan would even be useful for someone blind from birth. From what I've heard, they don't think about navigating spaces the same way, so a floor plan might be far from the mental models they use.

[0]: https://evengrounds.com/services/tactile-3d-printed-models-f...


Sometimes I draw UML-like diagrams when I join a project (when the project is big enough that my mind melts if I try to keep track of everything), and I wonder if there are equivalent representations of such things.

Linear text works perfectly for me for documentation, teaching/learning, etc.

But systems also seem to be better digested in the form of spatial representations. I've met a lot of CS people who fantasized about the possibility of displaying all the files of your codebase in a VR-like environment augmented with visual cues (UML), and I must admit that I would find this unnecessary but comfortable -- and I can imagine applications of this in other domains; imagine a teacher arranging the whole of Kant's philosophy as a spatial-visual system referencing positions, comments, etc.

Eyes are cool because you can focus on something while knowing that some available information is there around the zone you are focusing on; in a sense, so is the hand, locally, but I imagine (I don't know) it would require some superhuman level of braille reading to be able to jump back and forth, reading on different fingers. So that's again a probably stupid question to ask the blind crowd of HN: are you able to do this?



I’m not blind, but if carefully tailored, pictures can be useful, certainly for charts and graphs.

See https://en.wikipedia.org/wiki/Tactile_graphic, https://data.europa.eu/apps/data-visualisation-guide/tactile...

Given the low information density of tactile graphics (eyes can resolve finer details than fingers, so braille letters are large, dithering isn’t useful in small areas, etc.), it’s even more important to know what you want to show in an image, though, so that you can leave out the rest.


This does not look like Braille to me. Braille is a system that uses cells composed of six (or eight) dots. This is just dots strewn all over the place.

Reminds me of this relatively new device in the space though: https://store.humanware.com/hca/monarch-the-1st-dynamic-tact...


OP here - So the way we're rendering this is by using the Braille characters that are available in the character set on the terminal.
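
To make that concrete: Unicode has a Braille Patterns block (U+2800 to U+28FF) where each character encodes a 2x4 grid of dots, so a common approach is to downscale the image and turn each 2x4 block of pixels into one character by thresholding the pixels to decide which dots are raised. Here is a rough Python sketch of that general technique (not necessarily what this repo does; the image_to_braille helper and the Pillow dependency are just for illustration):

    from PIL import Image  # assumes Pillow is installed

    # Bit offsets of the 8 Braille dots in a 2x4 cell, indexed as [row][column].
    # Unicode Braille patterns start at U+2800; setting bit n raises dot n+1.
    DOT_BITS = [
        [0, 3],
        [1, 4],
        [2, 5],
        [6, 7],
    ]

    def image_to_braille(path, width=80, threshold=128):
        img = Image.open(path).convert("L")  # grayscale
        # Each output character covers 2 pixels horizontally and 4 vertically.
        px_w = width * 2
        px_h = max(4, int(img.height * px_w / img.width) // 4 * 4)
        img = img.resize((px_w, px_h))
        pixels = img.load()

        lines = []
        for y in range(0, px_h, 4):
            chars = []
            for x in range(0, px_w, 2):
                bits = 0
                for dy in range(4):
                    for dx in range(2):
                        if pixels[x + dx, y + dy] > threshold:
                            bits |= 1 << DOT_BITS[dy][dx]
                chars.append(chr(0x2800 + bits))  # U+2800 is the empty cell
            lines.append("".join(chars))
        return "\n".join(lines)

    print(image_to_braille("selfie.png"))

Swapping the plain threshold for error-diffusion dithering (e.g. Floyd-Steinberg) is what would give each cell a shade/coverage effect rather than hard edges.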

That's interesting but not clear at all in the readme. Do you choose specific characters for their shade/coverage of each dithering cell?

This comment somehow comes off as obtuse.

Still, I'm confused as to what you're mentally expecting.


Here's my more-or-less decade-old “image to braille converter”: https://max-m.github.io/misc/braille/index.html

The source code is unminified and unobfuscated.

Another somewhat similar toy is https://max-m.github.io/InstaECB/index.html :)


stupid. maybe post something you actually made instead of telling a computer to make it.


