TURN Meeting No.14 Report (Part 1)

2021.10.5

TURN Project Management Staff

In TURN Meetings, guests from diverse backgrounds get together to talk and think about society and artistic expression going forward from the perspective of their respective “differences.” The 14th meeting was livestreamed on Saturday, August 17 from STUDIO 302, 3331 Arts Chiyoda in Tokyo.

TURN Meetings have been streamed online since FY2020. So how can guests, staff members and participants from different backgrounds (involved in TURN events and projects) work together and communicate their intentions in the nerve-racking setting of a livestreamed event? This is a question TURN staff members have faced across the three online TURN Meetings held over the past year, the implementation of which has involved plenty of trial and error along with a wealth of new discoveries.

For this event, we delved deeper into issues that have come to light through recent activities, under the theme of “the difficulty in communication.” Part 1 featured Satsuki Ayaya, who conducts “Tojisha-Kenkyu”* as an autistic person**; Eri Ishikawa and Shizue Sazawa, both deaf “sign language navigators” for previous TURN Meetings; and Yuko Setoguchi, who acts as the “feeder” of signed information to the deaf navigators. In Part 2, guest Satsuki Ayaya was joined by our other guest, critic and radio personality Chiki Ogiue, who engaged in discussion together with TURN supervisor Katsuhiko Hibino and on-site Arts Council Tokyo staff members.

The livestream this time around included some unique and novel content, with discussions focusing on the gaps in our communication. Here we report on the proceedings.

* Research conducted by someone who is directly affected by the specific subject of the research. In Japan it is predominantly used in the field of disability studies.

** “Autism spectrum disorder” is a condition that creates difficulties in social communication, as defined by medical diagnostic criteria. When Ayaya introduces herself, she drops the word “disorder” from the term “autism spectrum disorder” (which is often used in Japan) and describes herself as “autistic.”

■ Talking about the “gaps” in communication itself

Guests sitting in a circle talking during Part 1

The aim of TURN Meetings is to create and provide a highly accessible platform giving everybody the chance to join in with discussions and deepen their insights, regardless of differences in background or culture. As before, this online event involved multiple methods to convey information onscreen and in the studio, including sign interpreting, audio guides and subtitles.

However, Japanese Sign Language and spoken Japanese each have their own linguistic systems and methods of expression suited to their users, so getting the two to work in tandem under the conditions of a livestream means that on occasion “translation” becomes difficult, and the intended meaning can’t be conveyed as you would like. Today’s TURN Meeting comprised two parts. Part 1 was aptly titled “Issues and possibilities in live online streaming,” and the group discussed what had emerged from previous meetings in this regard.

Of our guests, Ishikawa, Sazawa and Setoguchi have been involved “behind the scenes” at previous meetings. At each online TURN Meeting, so as to deliver better on-screen sign language to deaf viewers, what is spoken in the discussion is first converted into sign language by a hearing “feeder,” and the deaf sign language navigator then re-translates it into Japanese Sign Language and relays it online, so that deaf audiences can better understand the content. Ishikawa and Sazawa have acted as deaf sign language navigators and Setoguchi as the feeder. All three have experienced first-hand the difficulties inherent in communication through their previous work on information accessibility at TURN Meetings.

Guests Eri Ishikawa (top left), Shizue Sazawa (top right), Yuko Setoguchi (bottom left), and the two sign interpreters (bottom right)

As somebody on the autism spectrum, a type of developmental disorder, our other guest Satsuki Ayaya conducts research on her own body and senses. In a short presentation that also served as a self-introduction, Ayaya cast doubt on autism spectrum disorder being referred to as a “social communication disorder” under the medical diagnostic criteria. “For instance, in the case of deaf people, their condition might mean impaired communication in the world of vocalized conversation, but they can converse freely in the world of sign language, which in many cases means any impairment to communication ceases to exist. In other words, a communication disorder is not an individual attribute but something that occurs between people,” Ayaya said. She also pointed out that autism, too, rather than splitting people into “people with communication disorders” and “normal people,” should really be defined as a “communication disorder” that occurs between people whose physical characteristics let them conform to the culture and social rules shared by many (the majority) and the various groups of people whose physical characteristics and cultural backgrounds make it difficult to conform (minorities).

Guest Satsuki Ayaya

In addition, Maria Hata, who is in charge of the TURN project at Arts Council Tokyo, took part in the discussion as moderator. Hata came up with the theme for this meeting, having experienced the difficulties of on-site coordination each time alongside Ishikawa, Sazawa and Setoguchi. “For the TURN Meetings, we’ve had various ‘interpreters’ working as a team to form a relay of communication. But on occasion, communication between them has been difficult,” Hata said. Under these circumstances, she added, for this event focusing on misunderstanding itself she wanted to engage in dialogue with the other participants while staying conscious of the need to “mutually confirm” whenever they did not understand each other, rather than have the discussion proceed without a hitch.

■ What makes “deaf interpreting” difficult?

In Part 1, everyone first reviewed video footage of previous meetings for inconsistencies in communication and translation. For example, the footage of Meetings No.12 and No.13 showed lags between the spoken Japanese, the on-screen subtitles and the sign language.

The livestream screen for TURN Meeting No. 13

Sazawa and Ishikawa, who have acted as “deaf navigators” at these meetings, pointed out the intrinsic difficulty of combining livestreaming with the task of “interpreting.”

Sazawa said, “When I check the video there is definitely a part where the content is out of sync. It’s ultimately down to the fact that it’s a livestream. In TURN Meetings the process of communication takes place in steps, from the feeder to the deaf navigator, so there’s always going to be a lag with the video. I felt this caused some problems with our facial expressions. Especially with a recorded medium like video, I think a solution would be to pre-record, which would improve the quality of the interpreting.”

Guest Sazawa

Continuing on from this point, Ishikawa added, “I’ve been involved with TURN from the start, and I can more or less understand the opinions given by regular speakers like supervisor Mr Hibino. But even so, there are still times while I’m interpreting when I worry because I don’t understand what the speakers mean. I think it might be worth rethinking the significance of insisting on a livestream for TURN Meetings online.”

Guest Ishikawa

Setoguchi said that the difficulties Sazawa and Ishikawa talked about regarding simultaneous interpreting also applied to hearing sign interpreters. “Simultaneous interpreting means that there are occasions when it is necessary to make a wild guess as to the meaning of a remark within the flow of time. The best approach is when you can check meaning while you are interpreting,” she said.

Setoguchi went on to explain that languages can be divided into those that belong to “high-context cultures,” where people read between the lines and surmise a lot from a small amount of information, and those that belong to “low-context cultures,” which require clear and specific reference to content and subject, with Japanese belonging to the former and English and sign language to the latter. Speaking from personal experience, Setoguchi added that these differences between languages, together with the abstract language peculiar to art, also made simultaneous interpretation difficult.

Guest Setoguchi

Meanwhile, Ayaya uses these two contrasting languages on a daily basis. Ayaya, who says she had trouble vocalizing as a child, lives as a hearing person and understands the culture of hearing people, but found voiced conversation difficult and felt excluded as a result. Given these circumstances, at university she joined a group created by deaf students to help make friends and address problems of information accessibility, and learnt to sign through this. However, there were some aspects of deaf people’s signing that she found difficult, and she had the sense that she “didn’t belong to either the hearing or deaf world.” Signing this, Ayaya emphasized her position of living between languages with different cultures.

What made a big impression on me during this event was how the participants frequently stopped the discussion whenever there was something they didn’t understand, to check what it meant, in line with the “mutual confirmation” Hata mentioned at the start. In a normal discussion, most of the time you would hesitate to ask questions or express discomfort for fear of disrupting the flow of the discussion or the feelings of others, and would instead somehow guess at the meaning and convince yourself you had understood. In fact, there was a peculiar feeling of tension when the participants repeatedly checked each other’s intended meaning. But on the flip side, it was a reminder that there are in fact many communication failures hidden in casual, passing conversation.

While comments were being checked, the caption “undergoing checking” appeared in the upper right corner of the livestream screen.

■ What makes an ideal livestreamed online event?

Participants shared their thoughts, before looking back at parts of previous meetings which seemed to have “gone well.”

One of several clips chosen by Setoguchi was drag queen Madame Bonjour JohnJ’s read-along from a picture book in TURN Meeting No.11. The “deaf navigator” for this part was Sazawa. Explains Setoguchi: “Sazawa herself ordinarily does work involving communicating the world of picture books to children and elderly people through sign language, so I really wanted her to handle this part. In fact, her sign interpreting for this section made me feel as if Sazawa was communicating not so much the text as the worldview and emotion of the picture book itself.”

The livestream screen from TURN Meeting No. 11 (Sazawa is on the left, with Madame Bonjour JohnJ on the right).

To this Sazawa responded, “JohnJ is a really flamboyant and charming person so I put a lot of thought into how to represent her in my signing. Unless I expressed things in a way that matched JohnJ’s energy, it wouldn’t get across to deaf people. So I tried to get creative with that aspect.” At the same time she said, “I was able to give the signing a rhythmical quality partly because I knew the contents of the picture book beforehand. I think pre-recording would improve the quality of the interpreting even more.”

Having seen the same review footage, Ishikawa also pointed out that when JohnJ appeared on-screen, the subtitles generated by the UDTalk speech-to-text app were not displayed correctly. She didn’t hold back in her response: “It irritated me, because my work includes correcting UDTalk conversion errors. Personally, I find it very distressing that the errors remain as they are on video.”

Throughout the discussion, Ishikawa and Sazawa in particular repeatedly questioned why the TURN Meetings were livestreamed. “Sign interpreting doesn’t just mean turning speech into signing – how we interpret is also based on the context of the deaf audience’s history, practices and habits. So really, the ideal would be to compare notes properly beforehand,” said Ishikawa.

Meanwhile, Hata asked what improvements they thought could be made to the livestreams, to which Sazawa replied that the best way would be to take different approaches depending on the program and its components. “It became a talking point, for example, when the broadcasts of the recent Olympic Games Tokyo 2020 closing ceremony and the Paralympics opening ceremony were covered by deaf sign interpreters for the first time. Unlike the time lag you get with subtitles, the advantage was that people could enjoy the images while watching the sign interpreting in real time. It may be that the teamwork was good between the deaf interpreters and the ‘feeders’ who use sign language to communicate speech and audio content to the deaf interpreters, and that they were able to chew over the materials in advance. But I did feel that for sections that were predominantly musical, it would have been better to use hearing interpreters who knew the music. In this way you can switch to the most suitable interpreter within one program.”

Setoguchi, who had been listening to the conversation, said that in the case of a deaf interpreter, it was true that making a recording was the most accurate method. “But as I understand it, TURN is an experimental space. As such there will be successes and failures, and failures are an important source of information.” She added, “I personally want people to know via TURN that ‘deaf navigators’ and ‘deaf interpreters’ exist,” remarking that in parallel to the significance of discussion on information accessibility, such as the accuracy of interpreted content, it was significant in itself that the livestream made deaf navigators visible to viewers.

■ Sharing the burden with the people you collaborate with

What was memorable at this point was when Ayaya, who up until now had been using sign language, suddenly said, “I’m going to speak from now on,” offering a new perspective on her position as a user of two languages. Ayaya said that of the relays performed by the interpreters in the studio, the conversion from speech to sign language felt closer to her own sensibilities than that from sign language to speech, but she explained that she decided to speak at this point because she felt it would mean fewer gaps in comprehension.

Ayaya also suggested: “Working in real time also involves constant and necessary modification. I feel that the way moderator Hata speaks has changed compared to when she first spoke at today’s event, and the way she is speaking now is easier to convert into sign language. In this sense, the importance of livestreaming also lies in the ‘adjustments’ made on the spot, both naturally and consciously.”

In response to this, Ishikawa remarked, “A sign interpreter appearing in an onscreen ‘wipe’ (a picture-in-picture box often used in Japanese broadcasts) might for example realize a mistake, but even if they are backed up by another interpreter, they can’t very well say ‘I need to correct the content.’ If we could do our job more collaboratively by checking the meaning of something straight away with other interpreters and speakers or performers, we might have more of a sense that we are creating the event together.” She suggested that one effective way of improving this situation might be to “summarize the main discussion points on a whiteboard,” as we did in today’s discussion.

Writing key words on the whiteboard on the day. Moderator Maria Hata is on the right.

Setoguchi remarked that it was important to use the opportunity presented by the proliferation of streaming culture under the pandemic to try different things. “When I recently heard someone say there was a link between ‘sufficiency’ and ‘accessibility,’ I thought how true this was. Having an accessible environment allows you to interact and share with different people. I think we have to embrace this feeling more than we do now.”

In relation to this, Ayaya voiced the opinion that it was important to be able to choose the information tools that suit you from multiple options. In Ayaya’s case for example, it is easy for her to understand things when speech and subtitles match, but she says when the timing is off, it feels like information overload for her, and she ends up confused. In this case, rather than create new methods of accessibility, you could say it is more important for a viewer in front of the screen to be able to choose their preferred medium from existing information sources.

In this way, everyone has their own methods and ways of understanding. Ayaya gave some examples – people who prefer to get information in short bursts rather than viewing something for a long time; people who understand better through repeated viewing; and people who dislike going to places like movie theaters and are more comfortable watching things at home. You could say that not just the number of languages but also the selectability and range of viewing environments are important from an accessibility perspective.

Hata wrapped up Part 1 with these words: “There is no single right answer. The important thing is to try things out in discussion with the people we work alongside.”

As the discussion came to an end, Sazawa said it was the first time she had been able to talk so frankly. “Up until now, hearing people might have thought that because there was a sign interpreter, there wasn’t any problem in terms of accessibility to information through sign language. But to be honest, being an interpreter is tough, and there isn’t any place to air your reservations. Through this event I’ve been able to share the burden that only interpreters have known. During the livestream, I was glad people noticed when the interpreting was tricky, and that we could all think about solutions together.”
