Software
Software Development
Software projects in the assistive tech space can vary widely, though software-focused projects in the CRE[AT]E Challenge typically take one of four forms:
a website/webapp (that may be public, or just for your co-designer)
desktop (non-web) software
a mobile app
software for microcontroller units (MCUs: Arduino, Beagle Bone, etc) or single-board computers (SBCs: Raspberry Pi, NVIDIA Jetson, etc) to interface with sensors and actuators
We will refer to the above as web, desktop, mobile, and MCU/SBC for the purposes of this page.
In the sections below, version control and user interface / front-end development will be relevant to all types of development. After those sections, we will discuss some issues that are specific to the type of development that you are doing or are considering.
Version Control
For all categories, you should seriously consider the use of version control, a software engineering process that lets you keep track of your changes and collaborate with others.
GitHub is a code hosting platform: a place to put repositories of the code you're writing so you can track your changes and collaborate with others. You can host a website directly from GitHub-hosted code on GitHub Pages. GitHub also has a Student Developer Pack with access to lots of helpful resources. If you will be writing significant amounts of code, you should use GitHub for version control.
User Interface and Front-End Development
For all categories, you must consider the user interface - how the user will go about interacting with your product. This consideration exists regardless of whether your product has a visual interface or not. Some interactions that you design will be fairly contained between the software and the user, such as desktop-based software to help a co-designer practice a skill on the computer. Other interactions exist between the user, the software, and some other part of their environment, such as apps to remind the user to take medication, or to help decide what clothes to wear.
For all categories of software, much like for other products, you must first figure out what the interaction will look like, what pages or functions the user is on at any given time, and how these might change during the course of use.
For categories of software that involve a visual user interface (most cases other than some MCU/SBC instances), you should pay particular attention to what the user interface is like, and how the user moves through it.
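One lightweight way to plan this flow, before building anything, is to write it down as a tiny state machine: which screen the user is on, and which events move them to another screen. The screen names and events below are hypothetical placeholders - substitute the pages and functions from your own design:

```python
# Sketch of a screen-flow state machine for planning a user interface.
# Screen names and events are hypothetical examples, not a real app.

TRANSITIONS = {
    ("home", "start_practice"): "practice",
    ("practice", "finish"): "summary",
    ("summary", "back"): "home",
    ("home", "open_settings"): "settings",
    ("settings", "back"): "home",
}

def next_screen(current: str, event: str) -> str:
    """Return the screen the user lands on after an event.
    Unknown events keep the user on the current screen."""
    return TRANSITIONS.get((current, event), current)

# Walk through one possible interaction:
screen = "home"
for event in ["start_practice", "finish", "back"]:
    screen = next_screen(screen, event)
print(screen)  # ends back at "home"
```

Writing the flow out this way (or drawing it as boxes and arrows on paper) makes it easy to spot dead ends - screens the user can reach but never leave - before you commit to any interface code.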
Figma is a popular application for creating prototype mockups of an app design - basically, you can quickly sketch how an app on your phone or a website might look and work for the user, without building all the features yet. Figma is free for students and educators. If you are prototyping a front-end, it's a good idea to start here.
Web Development
TODO - learning resources
TODO - simulating mobile viewports
Desktop Software Development
TODO - packaging - just because it runs on your computer, doesn't mean it runs on your co-designer's
Mobile Application Development
Mobile app development is generally split into Android and iOS development, depending on the ecosystem that your co-designer uses.
iOS development
At present, Apple keeps iOS development behind significant barriers to entry. Your team must:
1) own a Mac computer for anyone wanting to do development
2) pay for the Apple Developer Account ($99/year) on a per-user basis
3) be familiar with Swift or Objective-C
Due to these barriers, we do not recommend that teams do iOS development for the Challenge unless you are willing to pay the costs above and are already familiar with the iOS development process. Especially if your team is new to computer programming, the up-front cost of the Apple Developer Account is likely not worth it.
Consider developing a webapp instead that your co-designer can reach from their phone.
Android development
Mobile development for Android is far more accessible. Android development can be done on any computer, using the Java or Kotlin languages. The Android Developers page has a useful guide to getting started.
CodingWithMitch is another resource with lots of tutorials and example projects to help you along.
Visual Programming Environment for Mobile Apps
MIT App Inventor is an intuitive, visual programming environment that allows everyone to build fully functional apps for Android phones, iPhones, and Android/iOS tablets (iOS apps will still need a developer license to deploy). The About Us page highlights a few teams that have already used App Inventor to create assistive tech.
What MIT App Inventor is good for -
Rapid Prototyping - App Inventor allows users to quickly create mobile apps, making it ideal for testing app ideas and building rapid prototypes. These apps can range from simple utilities to more complex, interactive applications like games, social networking apps, and tools that interact with hardware or online services. Its drag-and-drop interface simplifies coding, so you do not need extensive programming experience to build simple apps.
IoT and Hardware Integration - It can be used in projects that involve Internet of Things (IoT) devices, such as integrating mobile apps with hardware (like Arduino or Raspberry Pi) for sensor data monitoring, home automation, or remote control of devices. Other platforms, such as ThingSpeak, can also be used in such projects, but MIT App Inventor is a better choice if you specifically want to make an Android app (and not a website or web-based application).
Useful Links -
Software Accessibility Considerations
When designing for accessibility, it's important to keep a few principles in mind.
Screen-reader accessibility means that screen-reading software can read your interface aloud to blind and low-vision users. This matters for any web page or software that displays text. As part of screen-reader accessibility, you should check that the images you include in your software or web pages have alt text that provides a reasonable description of the image for a screen reader to read.
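As a quick illustration of the alt-text check, here is a minimal sketch using only the Python standard library's HTML parser. Real audit tools check far more than this, so treat it as a starting point rather than a complete accessibility check:

```python
# Minimal check for <img> tags missing alt text, using only the
# standard library. The sample page below is a made-up example.
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []  # src of each image with no alt attribute

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            # An empty alt="" is valid for purely decorative images;
            # flag only images with no alt attribute at all.
            if "alt" not in attrs:
                self.missing.append(attrs.get("src", "(no src)"))

page = '<img src="logo.png" alt="team logo"><img src="photo.jpg">'
checker = AltTextChecker()
checker.feed(page)
print(checker.missing)  # ['photo.jpg']
```

The same idea scales up: run a checker like this over every page of your site, and use the resulting list as a to-do list of images that still need descriptions.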
Color and contrast contribute significantly to visual interface accessibility, both for low-vision and for colorblind individuals. Adobe has a useful article on the topic, and https://www.randoma11y.com/ is a useful tool for generating color palettes with sufficient contrast.
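Contrast between text and background can be checked numerically. The WCAG 2.1 guidelines define a contrast ratio based on the relative luminance of the two colors, and recommend at least 4.5:1 for normal body text. A small implementation of that formula:

```python
# Contrast ratio between two colors, per the WCAG 2.1 definitions.
# WCAG recommends a ratio of at least 4.5:1 for normal body text.

def relative_luminance(rgb):
    """Relative luminance of an (R, G, B) color with channels 0-255."""
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio: 1:1 (identical colors) up to 21:1 (black on white)."""
    l1, l2 = sorted([relative_luminance(fg), relative_luminance(bg)], reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0, black on white
```

Running your palette through a function like this (or through an online contrast checker) before building the interface is much cheaper than fixing colors after your co-designer can't read the text.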
One tool for checking whether your website complies with accessibility guidelines is accessiBe's accessScan.
The Firefox web browser also has a built-in colorblindness simulator as part of its Accessibility Inspector so you can check your websites and other browser-based software.
AI and Machine Learning Tools for Accessibility
Artificial intelligence (AI) and machine learning (ML) are increasingly popular tools for student projects in the Challenge.
This is a big topic, and we'll split it up into a few broad parts, some that are general to the field, and some that are specific considerations for a disability-related use case.
First, let's take a look at the "machine learning life cycle," or the series of decisions that are made in a machine learning project, from initial conception to after it has been deployed.
TODO - other ML resources
HuggingFace is a common platform from which to get both trained machine learning models and datasets. Check their documentation page for more information on getting started, depending on what you need from them.
Data Bias and Disability
In addition to the typical considerations for ML, in the assistive tech space, you also need to consider whether dataset biases will work against you, and how much. For example, computer-vision-based emotion recognition, as an aid or a training tool, has come up repeatedly in the AT space for people with autism spectrum disorder or other types of neurodivergence. There are two ways such tools could be used: one where computer vision is used to recognize the emotions of neurotypical individuals, and the other where it is used to recognize the emotions of neurodivergent individuals. One of these will work much better than the other with traditional datasets! Remember that public datasets collected from people will very rarely contain enough data from disabled populations to be useful in cases where the disability strongly affects the input of the model.
This bias means that if the input to the model is something out in the environment (e.g. household objects), or something else that is not affected by your co-designer's disability, then public datasets may be fine. However, if the input to the model is something from your co-designer (facial expressions, vocalizations, etc) that is affected by their disability, you may need to fine-tune your model with data from your co-designer for it to work. Similarly, for very specific tasks (such as visually identifying Uno cards), you will also need to collect data to fine-tune your model. The OpTECHs got their school involved in sending in data to help them train their model, something you might consider as well, if it's appropriate!
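When you do collect co-designer data for fine-tuning, it's worth holding some of it out for evaluation, so you can check that the fine-tuned model works for your co-designer specifically and not just on the public data. A minimal sketch of that split, with made-up file names standing in for a real dataset:

```python
# Sketch: merging a small co-designer dataset with a public one for
# fine-tuning, while holding out co-designer samples for evaluation.
# File names are hypothetical placeholders.
import random

public_samples = [f"public_{i}.jpg" for i in range(100)]
codesigner_samples = [f"codesigner_{i}.jpg" for i in range(20)]

random.seed(0)  # reproducible split
random.shuffle(codesigner_samples)

# Hold out ~25% of the co-designer data: the evaluation that matters
# is how the model performs on *their* inputs.
n_holdout = len(codesigner_samples) // 4
eval_set = codesigner_samples[:n_holdout]
train_set = public_samples + codesigner_samples[n_holdout:]

print(len(train_set), len(eval_set))  # 115 5
```

With only tens of co-designer samples, every held-out example is precious, but an evaluation set drawn only from public data would hide exactly the bias problem described above.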
Training Models
As a rule of thumb, don't train models from scratch unless you have to, or want to go through the process to learn it. If a model already exists, you should probably just try to get it and use it instead (see HuggingFace note, above).
For fine-tuning (and more general ML training) cases, you'll need to collect and label your datasets. You can do so manually on your own computer, but there are also online platforms such as CVAT and Teachable Machine that provide ways for you to annotate the datasets more easily using a graphical user interface and automated workflows.
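For whole-image classification labels, you don't necessarily need a dedicated tool at all: a simple table mapping each file to its label is often enough, and most training frameworks can read one. A sketch of that format, with hypothetical file names (annotation tools like CVAT export richer formats, e.g. bounding boxes, when you need them):

```python
# A minimal by-hand labeling format: a CSV mapping each file to its
# label. File names and labels below are hypothetical examples.
import csv, io

labels = [
    ("card_001.jpg", "uno_red_5"),
    ("card_002.jpg", "uno_blue_skip"),
]

# Write the label table (io.StringIO stands in for a real file here):
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["filename", "label"])
writer.writerows(labels)

# Read it back the way a training script might:
buf.seek(0)
rows = list(csv.DictReader(buf))
print(rows[0]["label"])  # uno_red_5
```

Keeping labels in a plain, human-readable file like this also makes it easy for teammates (or your school, as in the OpTECHs example above) to contribute labeled data without installing anything.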
Other Resources
If you are using App Inventor, there is also a page with example AI projects.