Rationale
Could OpenTTD be made accessible to a larger base of players by implementing support for assistive technologies?
OpenTTD is a visual game: it relies on graphics and text in complex relationships to communicate meaning, and it's unlikely that it could be made playable by near-blind people. That won't be the focus here.
However, people who have decent vision but may not be able to operate the game with a mouse, keyboard, or touch screen are essentially barred entirely. It's possible to perform some actions in the game using the keyboard only, but most of the core functionality requires dexterous use of a mouse, with fine control of the position of clicks and precise click-and-drag operations. We should consider offering some alternative ways of controlling the game.
There are also people who have some vision but have trouble reading text or numbers, or telling apart icons with minute differences. It could be an advantage to have these things narrated.
Technology
All the main desktop environments OpenTTD runs under support some kind of API for assistive technologies. These would let us expose the user interface of the game such that it could be controlled by alternative means, such as voice control, and let screen elements be narrated.
In general, assistive technologies require the application to "explain itself" by describing the available user interface in a tree structure. Each element of the tree is either a container of other elements, an element that contains text or pictures, or an element the user can interact with in one or more ways. A typical user interface has elements such as menus, windows, buttons, input fields, and lists. The main issue with OpenTTD as a game is that we also have a game world shown in viewports; let's save those for later.
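To make the "explain itself" idea concrete, here is a minimal C++ sketch of such an element tree. All of the types and names here are hypothetical illustrations, not existing OpenTTD code:

```cpp
#include <string>
#include <vector>

// Hypothetical sketch of the tree an assistive technology expects:
// every element has a role, a narratable name, and child elements.
enum class Role { Window, Button, TextInput, List, ListItem, StaticText };

struct AccessibleNode {
	Role role;
	std::string name;  // what a screen reader would speak for this element
	std::vector<AccessibleNode> children;
};

// Count all elements in a subtree, root included. A real implementation
// would instead walk this tree to answer queries from the platform API.
int CountNodes(const AccessibleNode &n)
{
	int total = 1;
	for (const auto &c : n.children) total += CountNodes(c);
	return total;
}
```

A window with a button and a text label would then be described as one `Window` node with two children, and the screen reader would navigate and narrate that structure rather than the pixels.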
On Windows, the API is called UI Automation, documentation is available here: https://docs.microsoft.com/en-us/windows/win32/winauto/entry-uiautocore-overview
On macOS, you implement the NSAccessibilityProtocol, documented here: https://developer.apple.com/documentation/appkit/accessibility_for_appkit
On GNOME, the API is called ATK, documentation is available here: https://gnome.pages.gitlab.gnome.org/atk/
The challenge here would in part be integrating these three (or more) different APIs with our internal UI system, and in part be designing the general interaction model overall.
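One plausible shape for that integration is a single internal interface with one backend per platform, selected in a single place at build time. A minimal C++ sketch, with every name hypothetical:

```cpp
#include <memory>
#include <string>

// Hypothetical internal interface; each platform would get one backend
// (UI Automation, NSAccessibility, ATK) implementing it.
class AccessibilityBackend {
public:
	virtual ~AccessibilityBackend() = default;
	virtual void AnnounceText(const std::string &text) = 0;  // e.g. screen-reader speech
};

// Stub backend for platforms without an accessibility API (or for tests).
class NullBackend : public AccessibilityBackend {
public:
	void AnnounceText(const std::string &) override {}
};

// The only place platform selection happens; the rest of the UI code
// talks to AccessibilityBackend and never sees an #ifdef.
std::unique_ptr<AccessibilityBackend> CreateBackend()
{
#if defined(_WIN32)
	return std::make_unique<NullBackend>();  // would be a UI Automation backend
#elif defined(__APPLE__)
	return std::make_unique<NullBackend>();  // would be an NSAccessibility backend
#else
	return std::make_unique<NullBackend>();  // would be an ATK backend
#endif
}
```

This mirrors how OpenTTD already isolates its video and sound drivers per platform, so the pattern isn't foreign to the codebase.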
Integrating the APIs also has two challenges: one would be avoiding a spaghetti of `#ifdef`s everywhere, and instead somehow making a general description that calls the appropriate system API. The other would be more fundamental: re-working large parts of the UI that simply aren't structured. For example, the concepts of "a list of strings" or "a table of data" are re-implemented in several places, in slightly different ways. A long time ago some work was put into making the UI more modular, with more reusable controls, but that never quite happened; that work would have to be resumed and completed first.
Accessing the game world
Now to the elephant in the room: How do you select a tile in the game world when you can't use a pointing device or touch screen, and just as important, how do you draw out a length of railroad track?
One way voice input systems offer mouse control is to split the screen into a 3x3 grid and have the user speak which cell of the grid to focus on, which then presents a new grid subdividing the chosen cell. One idea would be to offer that, but with a slight modification: when further subdivision either stops making sense, or the user says so, stop subdividing the viewport and instead start numbering the visible tiles. Then have the user speak the number of a tile to select it. (For example, "select tile, 3, 7, ok, 11, ok.")
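The refinement half of that scheme can be sketched in a few lines of C++. This is an illustrative sketch only; the types and numbering are my own assumptions, not an existing implementation:

```cpp
// Hypothetical sketch of spoken 3x3 grid refinement: each spoken number
// 1..9 (row-major: 1 is top-left, 5 is centre, 9 is bottom-right) picks
// one cell of a 3x3 grid laid over the current screen rectangle, which
// becomes the new, smaller rectangle to refine further.
struct Rect { int x, y, w, h; };

Rect Refine(const Rect &r, int spoken)
{
	int cell = spoken - 1;
	int col = cell % 3;
	int row = cell / 3;
	return Rect{r.x + col * r.w / 3, r.y + row * r.h / 3, r.w / 3, r.h / 3};
}
```

Once the rectangle covers only a handful of map tiles, the mode would switch from `Refine` to numbering the visible tiles directly, as described above.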
Of course, make sure to keep context, so that the user wouldn't have to keep re-selecting the tile where they left off when building sequential sections of track, and so on. It might also be possible to implement some extra context-selection commands, such as going back to the start of the previously built segment, or its middle, etc.
Now, for building track, it may be worth looking towards the Roller Coaster Tycoon family: either implementing its piecewise building outright, or alternatively offering a kind of dragging-like experience, where you choose a cardinal direction to build in and how long to build for. (Such as "build track, north-east, 20, north, 3, west, 4.")
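A spoken command like that decomposes naturally into (direction, length) segments applied from a starting tile. Here is a hypothetical C++ sketch; for simplicity it uses plain screen-compass directions, whereas OpenTTD's map axes are actually diagonal on screen, so a real version would need to map the vocabulary accordingly:

```cpp
#include <string>
#include <vector>

// Hypothetical sketch: "north-east 20, north 3, west 4" as a list of
// (direction, length) segments walked from a starting tile.
struct TilePos { int x, y; };
struct Segment { std::string dir; int length; };

TilePos ApplySegments(TilePos start, const std::vector<Segment> &segs)
{
	for (const auto &s : segs) {
		int dx = 0, dy = 0;
		if (s.dir.find("north") != std::string::npos) dy -= 1;
		if (s.dir.find("south") != std::string::npos) dy += 1;
		if (s.dir.find("east")  != std::string::npos) dx += 1;
		if (s.dir.find("west")  != std::string::npos) dx -= 1;
		start.x += dx * s.length;  // a real build step would place a piece
		start.y += dy * s.length;  // of track on each tile along the way
	}
	return start;
}
```

The same segment list could drive both a preview highlight in the viewport and the actual build commands once the user confirms.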
Details like building diagonal track on the left/right or top/bottom half of a tile, and how much road to build within a tile, would need some extra consideration, as would special controls for building more complex junctions where you may want to manually place multiple track pieces in each tile.
One thing that came up when I briefly mentioned this elsewhere: it may make sense to have a mode that auto-builds signals along rail track as you build the rail, instead of it being something you do afterwards.
Conclusion
This is all just wild ideas right now. Implementing accessibility would be a huge undertaking and require a lot of work on the game's UI in general. Would it be worth it? Who knows. Transport Tycoon is an old game, and the people who play OpenTTD out of nostalgia are not an insignificant part of the player base; they may also be the part of the player base that will, sooner than other players, need a little help controlling the game or reading the figures. And who knows how many people are out there who wish they could play but just aren't able to, because they can't use a mouse? I think it would be effort well spent.