Hello again! I am back to writing and posting thoughts on science education as a non-sighted experience. I didn’t give up on it after my first try last July but had to take a bit of a break. In any case, I am not the “giving up on things” kind of person. I eventually want to, and have also been told a few times that I should, write a book. On what theme, and in what language, I haven’t quite decided yet, but it is on the to-do list. So blogging, in a sense (not that SENSE), is a trial to get a feel for what writing is like.
I liked it when I started SENSE 1.0, but I always spent too much time thinking about how to phrase my thoughts. That amount of time made most posts a bit artificial; they were not very spontaneous. So now, in this new edition, I’ll try to think less while writing and simply project my chain of thoughts from that particular day or week. Of course that does not mean I will write all kinds of rubbish without thinking, but surely you get what I mean. If not, the comments or the “unfollow” option are there for you.
Also, since the last blog post, I’ve got a new machine to type on, which makes reading, writing and posting more convenient, so that should help me deliver my messages, questions, comments and insights on various aspects of education and sight loss. What didn’t help, on the other hand, is that during the past half year my way of seeing fundamental science, and its teaching and research, has changed. Just as much, my understanding of what I could do with my life, how I could make real change, help others if they need it, or how we could benefit the most from academic research and the dirtier commercial, business world has changed a lot. Thus, I thought I needed to clear things up in my mind first, and only then write about them. Beyond all that, I explored and learnt about new fields of science, new ideas I had never heard of before, and I have become highly passionate about them. It isn’t a secret: I am talking mainly about sensory substitution and tangible computing. But more about these in a more specific post.
Now I am only writing to say hello again and to get back to writing. I know there used to be a being we call Jesus in a particular language, culture and religion, and it only took him 3 days to get back on track, not 7 months or so as I happen to have taken, but hey ho, in the end it is Easter! Hence SENSE is back.
With varying regularity I am going to share some thoughts: progress around our R&D projects, thoughts on other people’s work in the context of science education and sight, PR work, business work, our new services and products, websites, news and anything else that might interest me.
I just came across the following call for a conference in 2016. The text speaks for itself. It is good to see that more and more such conferences are out there. Hopefully we will also see significant and positive consequences for the accessibility of STEM resources and literature.
A significant number of online journals and e-textbooks, as well as other forms of academic/educational information are now digitized for various purposes. Computerized processing of such information is also being actively studied. For instance, in Japan, so-called “digital textbooks” (the official name of an e-textbook in Japan) are supposed to be fully adopted in elementary and junior-high school by April, 2020. Digitization is certainly a keyword in contemporary society, in which “Knowledge” forms its foundation.
However, digitization of STEM (science, technology, engineering and mathematics) contents and its applications have their own problems, different from those in a non-technical field. For example, while many online journals are provided in PDF (Acrobat Portable Document Format), a mathematical formula in such documents has no semantic representation. It is only depicted visually as a set of characters/symbols in a two-dimensional layout or an image. Thus, we cannot use it for further processing such as searching for the mathematical expression in the PDF or reading it out with a screen reader.
We believe that one of the most serious problems is the poor accessibility of digitized STEM contents. In many cases, it is hard for print-disabled people to read/author those contents, although their accessibility is definitely important to achieve a truly inclusive society. For a non-technical document, even if the original one is inaccessible, we can usually convert it easily into an accessible representation with standard tools such as OCR (optical character recognition) technology; however, as far as STEM contents are concerned, we cannot do so at all.
Since a STEM document includes many technical characters, symbols and notations such as mathematical expressions, chemical formulas, figures, tables, etc., its conversion into an accessible format such as LaTeX or MathML is quite beyond the capabilities of the standard tools. In addition, to establish a STEM-accessibility environment in education, digital libraries and elsewhere, not only conversion tools but also many other systems such as authoring tools and accessible players should be improved/developed.
The 3rd International Workshop on “Digitization and E-Inclusion in Mathematics and Science 2016” (DEIMS2016), which is supported by Nihon University, Junior College Funabashi Campus and the Not-for-Profit Organization: Science Accessibility Net, will bring together experts from around the world to present and discuss state-of-the-art technologies (research and development), novel activities and future perspectives on digitization/computerized processing of STEM contents, their applications and accessibility.
A similar workshop was held in Fukuoka in December 2009 and in Tokyo in February 2012; DEIMS2016 is planned to be the third in this series of international workshops. For more than 15 years, the Infty Project (the primary organiser of this workshop) has also organized a series of annual domestic workshops on scientific-information processing and its accessibility. DEIMS2016 is also regarded as part of this series.
To view information on the previous international workshops in this series, please visit the DEIMS2012 home page:
http://workshop.sciaccess.net/DEIMS2012 and the WEIMS2009 home page:
The main topics of the workshop will include, but not be limited to:
– Digitizing and processing printed scientific documents (technology / practice)
– Digital mathematical / scientific libraries
– Accessibility of scientific documents (e.g. PDF to DAISY: Digital Accessible Information System)
– Accessibility of tables and figures
– Accessible e-books, in particular, accessible e-textbooks
– E-learning, online services in scientific fields
– Assistive tools for disabled people
– Services and technologies for inclusive environments
– Higher education for disabled students
– Other related topics.
Prof. Dr. Volker Sorge
The University of Birmingham, UK.
Prof. Sorge will present a lecture on his amazing work which has greatly increased the accessibility of chemistry. His talk will include a description of the new OCR system for chemical formulas developed by his team.
I’ve been thinking about touch and what extra it gives us. Do we need touch at all? If yes, and most probably we do, why is it so? My thinking goes like this. We do need tactile sensation; it gives an extra dimension to the flow of information we receive. I am not qualified to talk about the psychology of touch and the biochemical processes going on, but one thing I know for certain is that looking at a photo of a family member, or viewing your lover through a computer screen, is nothing like actually hugging them, holding hands, or even just shaking hands with somebody you respect. I could probably scribble together a paragraph on how oxytocin is released by touch, but for the moment let’s just assert that we require and benefit from tactile information. It is often easier to grab things and reposition them physically than to do it virtually with the help of a cursor or similar. It often proves more helpful to explain a topological concept using a 3D shape rather than a 2D drawing.
If touch has its own specific features we take advantage of, surely all our senses can provide something unique: some sort of information, a set of knowledge, that the other senses fail to deliver as quickly, in as much detail, as memorably, or at all. Take for instance taste. Even though food straight out of the fridge might look perfectly fine, tasting it might rapidly reveal that the lunch is no longer as healthy as initially anticipated. This is probably where taste excels the most and has the greatest comparative advantage with respect to the other senses.
As another example, take hearing. I was watching a movie with my family in the living room a few years ago during Christmas. While enjoying the film, we noticed a crackling type of noise from the kitchen. It was the Advent wreath, which had caught fire from one of the four candles. There was no way we could have noticed it by any means other than hearing, since a partial wall blocked our view of the Christmas decoration. So hearing gets a plus there. I could also mention the example of a fire alarm going off during the night, when people are likely to be asleep and not keeping an eye on lights or similar visual warnings.
Let’s shift to smell. Imagine you are listening to music while cooking and doing some of the washing up. If the oven is just opposite the sink, as it is in my flat at the moment, you wouldn’t see, nor would you hear, anything if the dinner you are preparing in the hot cupboard decided to burn. However, you might be able to smell the smoke and act accordingly. In the same way, we could probably list similar benefits for smell as I did for touch or taste. It feels nice to sniff a pleasing perfume, just as a smell can alert us to something unhealthy.
Finally, the grand master, vision, is back. Yet again, vision seems to be the ultimate sense: the one we use the most, and use the most efficiently. However, as described above, sight can sometimes fail. Hence I came to the conclusion that we need to segment information into subsets and assign each subset to the modality through which it is best conveyed and processed. Let’s stick to the example of a scientific graph, for instance a Gaussian distribution of data points with some scatter and noise. Now let us exclude the most conventional way of interpreting graphs and diagrams, i.e. through visual perception. Instead, I promote looking at the remaining information modalities. I must admit it is difficult even for me to imagine tasting or smelling a graph, but touching, hearing and verbalising should ring a bell. Some people would ask: why don’t we convert all the information into sounds? Some would ask: why don’t we turn everything into tactile formats? Yet another group of people might think: why don’t we describe the graph verbally?
My point of view is: why wouldn’t we do all of that at the same time? A description could give a nice overview of what can be seen, but nothing guarantees the description accounts for all the features a user might want to know about. So we should give the user the opportunity to explore. The general trend of the Gaussian curve could be well demonstrated using a tactile curve in addition to tactile axes. For the outliers, however, sound scanning would most certainly be more informative. Similarly, extracting text labels and feeding them into text-to-speech should deliver a faster transmission of information than braille cells, for example. Thus, splitting a coherent set of information into subsets and processing each of these information bits individually via the appropriate output modality is something we might need to seriously consider when trying to overcome a sensory loss.
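As a toy illustration, such a splitting could be sketched in a few lines of Python. Everything here (the routing rules, the z-score outlier threshold and the linear pitch mapping) is an illustrative assumption of mine, not a description of any existing tool:

```python
import math

def pitch(y, ys, low_hz=220.0, high_hz=880.0):
    """Map a y-value linearly onto an audible pitch range."""
    lo, hi = min(ys), max(ys)
    frac = (y - lo) / (hi - lo) if hi > lo else 0.0
    return low_hz + frac * (high_hz - low_hz)

def route_graph(points, labels, z_threshold=2.5):
    """Split a graph into subsets, one per output modality."""
    ys = [y for _, y in points]
    mean = sum(ys) / len(ys)
    std = math.sqrt(sum((y - mean) ** 2 for y in ys) / len(ys))
    # Outliers: points more than z_threshold standard deviations from the mean.
    outliers = [(x, y) for x, y in points
                if std > 0 and abs(y - mean) / std > z_threshold]
    trend = [p for p in points if p not in outliers]
    return {
        "tactile": trend,                              # overall curve: raised line and axes
        "audio": [pitch(y, ys) for _, y in outliers],  # outliers: sonified as pitches
        "speech": labels,                              # text labels: read by text-to-speech
    }

# A Gaussian curve with one injected outlier:
points = [(x, math.exp(-x * x)) for x in range(-3, 4)] + [(0.5, 5.0)]
routed = route_graph(points, ["x: time", "y: amplitude"])
print(len(routed["tactile"]), len(routed["audio"]))  # 7 1
```

The smooth trend goes to the tactile layer, the single outlier is sonified, and the labels are spoken, exactly the division of labour argued for above.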
The title is somewhat misleading, I must admit. There is nothing wrong with 3D printing as a way of additive manufacturing, creating new things and reshaping raw materials. Really, 3D printing is great; a few of my colleagues and I actually have the chance to use the technology at a given stage of our research work, and admittedly it is a great support to us. However, the question in the title is the one which helped me understand and find an answer I had been looking for for a considerable amount of time. I am heavily involved in research aiming to develop a tactile graphics display, which I will definitely introduce on this blog at a later date, but I find it smarter to postpone the introduction until our new website is ready to be published.
While editing our research principles and ideologies page, I dwelt a lot on how to phrase the thoughts I, and we as a research team, have, in such a way that it is more or less clear what we want to achieve and what our objectives are. I tried to retell the speculations around inclusive vs. assistive technology, refer to the multi-modal nature of the device, bring up sensory substitution, and also make sure that not only blind or visually impaired people will benefit from the new technology, but that sighted users are also going to feel the advantages of accessing information at a deeper level, using various senses to perceive, process and interpret the environment around us. Desperately trying to classify the technology we are about to produce, I initially thought I should draw an analogy with 3D printing. Probably I would have said something like:
“Our technology will teach people how to be less vision-dependent and enable a revolutionary way of accessing information, in approaches different from how we currently see the world. Tactile graphics displays will open up new ways of interacting with information, similarly to 3D printing.”
Instead of writing the idea down and moving on to the next ambitious paragraph detailing how amazing we are and how fantastic our project is, I thought there were more useful things to do, so I kept thinking about the aforementioned statement. It was then that I remembered a programme on 3D printing I had recently seen on the BBC. Clearly the show made the point that the technology is brilliant, but still, what is wrong with it? Why do we not have 3D printers in every household?
The episode gave a fairly detailed account of the major causes of the slow spread of these devices. Most of the reasons were the traditional ones: the quality-to-price ratio, the steep learning curve of operation, the dirt these devices can occasionally make, or the temperatures at which they operate. These reasons are undoubtedly correct, but I started to wonder whether there is a more profound reason we are not using 3D printers as widely as we could.
My response to that insight happens to be somewhat abstract, but what if we are simply not yet ready for the transition? A transition between viewing graphics in two dimensions on a screen and actually being able to build, grab and touch the three-dimensional model of what we only saw on a display a few hours ago. 3D movies are available to some extent, but they are still not as much of a turning point in media as they possibly could be. Perhaps our brain is smarter than that. Maybe we momentarily enjoy seeing 3D shapes coming out of the wide screen in the cinema, but is it not possible our consciousness says, “this is not real, just virtual images with depth, but I can’t touch them, so I won’t accept them as anything real, no matter how realistic a scene looks”? Maybe we took too big a step because technology allowed us to, but mentally speaking we are not quite ready to produce anything that appears on a screen. In my opinion we might need to introduce an intermediate step between 2D and 3D.
Thus, how I would probably define the device we are about to create is in terms of a 2.5D display. Why not give people the opportunity to raise shapes on the screen, and let them touch the shapes, feel their texture and explore their tactile response for a few years? Once they have got used to the new features, new technology and new sensation, they will want more; people will be ready to accept more. We always want more… We always ask for more than what we have, but if we are suddenly given too much new information, too many new abilities, we feel uncomfortable with all the newly acquired knowledge. So a 2.5D display would serve ideally as a transition from visual screens to 3D-printed objects. It would ensure a comfortable shift towards touching what we see, satisfy our need for more, and still give us enough time to be prepared for creating things out of the blue with a technology that is just around the corner.
As I was reading through the weekly digest of one of the mailing lists I am signed up to, a newsletter sent by the National Federation of the Blind caught my attention. More precisely, a statement quoted from the NFB’s president at a recent talk is what raised my curiosity the most. The assertion goes like this:
Low expectations stand between blind and sighted people, and not the blindness.
Now it is not entirely clear to me what the president of the organisation referred to. Is it the low expectations of blind people with respect to the quality of the assistive technology made for the blind community, is it the low expectations of sighted people with respect to what their non-sighted peers can accomplish, or is it perhaps something completely different?
For the time being, let’s leave prejudice aside and instead concentrate on technology and the blind point of view, as this blog is going to do most of the time. Yesterday I participated in an interview set up by a sensory substitution researcher at the University of Sussex. We had a conversation about a number of things, including life before and after blindness, evaluated 5 different assistive tools and 3 sensory substitution tools they are developing at the moment, and also rated each of these devices in terms of usability. Altogether we explored aids supporting navigation and everyday tasks such as reading, cooking and communication, and of course spent a small amount of time on science accessibility.
Before moving on, here is a BBC Click episode introducing a sensory substitution device being developed in the School of Psychology at the University of Sussex.
BBC Click, 23 May 2015, minutes 8–12
One part of the interview took a fascinating direction. I was trying out a tablet device which converts colours to sounds. Colour has always been a difficult question when considering blindness. One might ask, just as my interviewer did, how useful such a tablet would be. My response to this type of question is usually that colour-to-audio transcription has many beneficial aspects for those who are unable to distinguish segments of the visible electromagnetic spectrum. It is enough to think about coloured pie charts, or three different curves plotted on a graph with varying colour labels. Colours very often have a significant functionality. We don’t even need to think about abstract concepts such as the pie chart; simply consider the scenario of a man walking down a corridor. It is the colours that help the man find the door he is looking for: the yellow rectangle within the white ribbon next to him. Of course this is an idealised situation, since in real life we have many clues to help us, such as office labels, handles, door frames and so on.
Colours don’t only function as navigation aids in finding certain units that stand out of a larger element; colours also have very strong symbolic connotations, for instance national flags, traffic lights, or the symbolism attributed to black or white. Without colours we would lose many of our skills very quickly. However, colours don’t only have functions; they also have aesthetic values. Most people tend to agree that some colours are calming, others are depressing, and others are motivating. I can go further than that: just think about the arts, especially the visual arts. We don’t appreciate paintings because they are useful but because they are visually appealing.
So when it comes to converting graphics and colours to sounds, we need to ask whether we want to convey only the information carried, just the beauty, or both at the same time. Do we want to offer visually impaired, colour-blind and non-sighted people only the functional factors of colour, or its aesthetic component as well? Do blind people need the aesthetic features? Should they be satisfied with accessing useful information and put aside their other needs? When looking for the answers to the above questions, we are in fact exploring the questions of sensory augmentation and sensory substitution. Do we require the extension of our hearing in such a manner that we are able to hear colours as their frequency equivalents and match colour names with pitch values, in which case we are more likely to talk about sensory substitution; or do we strive to hear the full range of characteristics a colour can provide, possibly with more conscious understanding than natural sight can serve us with, in which case we can consider basic sensory augmentation? This is a matter of choice and expectations in the first place.
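To give the substitution option a concrete shape, here is a toy Python sketch that maps a pixel’s hue onto an audible frequency band. The linear hue-to-pitch mapping and the 200–2000 Hz band are my own illustrative assumptions, not how the Sussex device works:

```python
import colorsys

def colour_to_frequency(r, g, b, low_hz=200.0, high_hz=2000.0):
    """Map an RGB colour (0-255 per channel) to a frequency in [low_hz, high_hz].

    Hue carries the colour's identity: red sits at the bottom of the band
    and the hue wheel is traversed linearly up towards high_hz.
    """
    hue, _lightness, _saturation = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    return low_hz + hue * (high_hz - low_hz)

print(colour_to_frequency(255, 0, 0))  # pure red (hue 0) -> 200.0 Hz
print(colour_to_frequency(0, 255, 0))  # pure green (hue 1/3) -> about 800 Hz
```

A real device would also need to encode lightness and saturation (loudness and timbre are obvious candidates), which is exactly where the functional versus aesthetic question above starts to bite.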
The idea of functionality vs. aesthetics can be generalised not only to colours but to a wide range of other things, for example assistive technology. Do we want speech synthesis only to read a book for us, doing the job it was made for, or do we have higher standards and wish the synthetic voice to imitate dialogue and emotions, and to speak clearly and not like a 60s sci-fi robot? Do we want many devices doing various independent tasks, with no appealing design at all, or do we prefer a fashionable gadget which can do multiple tasks adjusted to individual needs? Do we really want to carry braille-transcribed magazines as giant books instead of featherlight paper copies or tablets? Most of these and similar issues are already being addressed, and with enough attention and effort we can hopefully achieve good progress. However, wouldn’t it be desirable to have non-visual access to the advantages a blackboard has, a pen and paper can give, a sketch-pad can offer, when it comes to noting equations down in an array, rearranging parts of these equations, or sketching graphs on the fly without a major time investment in plotting a tactile figure? Do we want to teach disabled kids only the pure and dry facts and mathematical tools, or shall we show them all the exciting phenomena in nature: funny, easy-to-remember, outreach-like experiments and demonstrations?
So back to where we started: the low expectations. Do blind people expect only to have access to science education up to a certain level, enough to learn a certain field of science and become a professional, or is it a fair expectation to ensure a joyful and equally comfortable way of acquiring scientific knowledge and experience?
I shall emphasise at this point that although I have a solid opinion on most of the questions asked here, I don’t necessarily know the correct answers, if there are any at all. Though I don’t have exact solutions yet, it is sometimes more difficult, and even more essential, to ask the appropriate questions.
I wish I could communicate my thoughts on science as well as the post I came across not long ago does. Though the issue is not related to the accessibility of science, I couldn’t possibly find better words for why we need to do science and boost its “popularity” at all levels.
I saw an interesting article in Nature, the opening paragraph of which reads:
The past few years have seen a slew of announcements of major discoveries in particle astrophysics and cosmology. The list includes faster-than-light neutrinos; dark-matter particles producing γ-rays; X-rays scattering off nuclei underground; and even evidence in the cosmic microwave background for gravitational waves caused by the rapid inflation of the early Universe. Most of these turned out to be false alarms; and in my view, that is the probable fate of the rest.
The piece goes on to berate physicists for being too trigger-happy in claiming discoveries, the BICEP2 fiasco being a prime example. I agree that this is a problem, but it goes far beyond physics. In fact it’s endemic throughout science. A major cause of it is the abuse of statistical reasoning.
Anyway, I thought I’d take the opportunity to re-iterate why I statistics and statistical…