
Apple ResearchKit 2.0 – Here’s What To Look Out For

Apple recently announced an upgraded version of its ResearchKit framework, a handy open-source toolkit for building iOS apps that carry out research studies. Alongside UI and performance enhancements, the release brings a host of other updates, including several new ‘Active Tasks’ and utility additions such as improved documentation and community updates on GitHub. Apple ResearchKit 2.0 sure looks promising for researchers who want to try the next wave of innovative data-collection and participant-engagement techniques on mobile devices.

Let’s take a quick look at all the new upgrades made to the framework.

1. UI Updates

The ResearchKit UI has been updated to match the look and feel of the latest iOS versions. Updates across modules include bold titles and fonts as well as left-aligned text throughout. Views now have footers anchored to the bottom of the screen, with the Cancel and Skip buttons relocated under the Continue/Next button for easier navigation. Additionally, a new card view defines the look of forms and surveys. These enhancements help participants navigate and fill out study screens more intuitively and easily, and may prove especially convenient for some cohorts, for example, the elderly.

2. A PDF Viewer

ResearchKit 2.0 provides a PDF viewer that lets users quickly view, search, annotate, and navigate PDF documents. This is likely to be widely used: dispensing reading material and resources to participants is a popular engagement technique in studies, and it is functionality that developers previously had to build themselves, without assistance from the framework. Definitely a score upwards for ResearchKit!
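
For instance, showing a bundled PDF can be as simple as wrapping it in the new PDF viewer step and adding it to an ordered task. The sketch below assumes ResearchKit 2.0's ORKPDFViewerStep(identifier:pdfURL:) initializer; the file name and the step/task identifiers are placeholders, so adapt them to your study.

```swift
import ResearchKit

// Minimal sketch: wrap a bundled PDF in ResearchKit 2.0's PDF viewer step so it
// can be shown as part of an ordered task. "StudyInformation.pdf" and the
// identifiers below are placeholders for illustration.
func makeStudyInformationTask() -> ORKOrderedTask? {
    guard let pdfURL = Bundle.main.url(forResource: "StudyInformation", withExtension: "pdf") else {
        return nil
    }
    let pdfStep = ORKPDFViewerStep(identifier: "studyInfoPDFStep", pdfURL: pdfURL)
    pdfStep.title = "Study Information"
    return ORKOrderedTask(identifier: "studyInfoTask", steps: [pdfStep])
}
```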

3. The Speech Recognition Active Task

This is a newly introduced active task that captures audio input from the study participant and transcribes it. The speech recognition task lets developers present either an image for participants to describe or a block of text for them to repeat. The participant's audio is recorded, and the resulting transcription is shown to them to edit if anything was transcribed incorrectly. The task generates results that include the recorded audio, the raw output of the speech-to-text engine, and the edited (or participant-approved) transcription.
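
Whichever factory method you use to build the task (the exact parameters vary between ResearchKit releases, so the task instance is passed in ready-made here), presenting it and collecting its results follows the usual ResearchKit pattern, sketched below.

```swift
import UIKit
import ResearchKit

// Minimal presenter for a predefined active task, such as the new speech
// recognition task. The ORKOrderedTask instance is assumed to have been created
// elsewhere via ResearchKit's predefined-task factory methods.
final class ActiveTaskPresenter: NSObject, ORKTaskViewControllerDelegate {

    func present(task: ORKOrderedTask, from presenter: UIViewController) {
        let taskViewController = ORKTaskViewController(task: task, taskRun: nil)
        taskViewController.delegate = self
        presenter.present(taskViewController, animated: true, completion: nil)
    }

    func taskViewController(_ taskViewController: ORKTaskViewController,
                            didFinishWith reason: ORKTaskViewControllerFinishReason,
                            error: Error?) {
        if reason == .completed {
            // The task result holds one step result per step; for the speech
            // task, the audio recording and transcriptions are nested inside.
            let taskResult = taskViewController.result
            print("Collected \(taskResult.results?.count ?? 0) step results")
        }
        taskViewController.dismiss(animated: true, completion: nil)
    }
}
```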

4. The Tone Audiometry Task

This is an enhancement to the tone audiometry task that was already available in the ResearchKit framework. It includes an updated algorithm and implementation to better evaluate a user's hearing. The tones decrease in dB HL until the user fails to respond, and then increase again until a successful response is made. With AirPods-specific calibration and use of the Hughson-Westlake method to determine a user's hearing threshold level on the dB HL scale, this enhancement has surely upped the utility quotient of the Tone Audiometry task.
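
To make that staircase behaviour concrete, here is a toy sketch in plain Swift (not ResearchKit code) of the down-then-up logic described above. ResearchKit's actual Hughson-Westlake implementation is considerably more thorough, and playTone here is a hypothetical closure standing in for tone playback and response capture.

```swift
// Toy illustration of a Hughson-Westlake style staircase: the level drops in
// 10 dB steps while the participant keeps responding, then climbs back up in
// 5 dB steps after a miss. `playTone` returns true if the participant
// responded to a tone presented at the given dB HL level.
func estimateHearingThreshold(startingAt level: Double = 40,
                              playTone: (Double) -> Bool) -> Double {
    var currentLevel = level

    // Descend until the participant no longer hears the tone.
    while playTone(currentLevel) && currentLevel > 0 {
        currentLevel -= 10
    }

    // Ascend until a tone is heard again; that level serves as a
    // (simplified) threshold estimate.
    while !playTone(currentLevel) && currentLevel < 100 {
        currentLevel += 5
    }
    return currentLevel
}
```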

5. The Speech-in-Noise Task

The speech-in-noise test is one more task that can be used to measure the hearing health of users. The test has participants listen to a recording of a spoken phrase mixed with background (ambient) noise and then repeat the phrase back using the speech recognition task. Over the course of the task, the level of the background noise increases or decreases, allowing you to measure the participant's Speech Reception Threshold (SRT). More audio files are expected to be added to the repository in the coming months.
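
As a rough illustration of what an SRT represents, the toy sketch below (plain Swift, not ResearchKit code) estimates it as the lowest signal-to-noise ratio at which at least half of the phrases were repeated correctly. Real scoring is more sophisticated, and the trial structure here is purely hypothetical.

```swift
// Hypothetical record of a single speech-in-noise trial.
struct SpeechInNoiseTrial {
    let snrInDecibels: Double   // speech level relative to the background noise
    let repeatedCorrectly: Bool // did the transcription match the target phrase?
}

// Simplified SRT estimate: the lowest SNR at which at least 50% of the
// phrases presented at that SNR were repeated correctly.
func estimateSpeechReceptionThreshold(from trials: [SpeechInNoiseTrial]) -> Double? {
    let trialsBySNR = Dictionary(grouping: trials) { $0.snrInDecibels }
    var passingSNRs: [Double] = []
    for (snr, group) in trialsBySNR {
        let correct = group.filter { $0.repeatedCorrectly }.count
        if correct * 2 >= group.count {
            passingSNRs.append(snr)
        }
    }
    return passingSNRs.min()
}
```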

6. The Amsler Grid Task

This one helps collect data about a user's vision. Participants hold the phone at a set distance from their face and close one eye or the other as instructed. A grid is then displayed, and users mark (with a finger or stylus) any areas where they see distortions, such as wavy or blurred lines. None of the existing ResearchKit active tasks offered an explicit vision test before, so this is a great new addition!
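
As with the other predefined active tasks, the Amsler grid is exposed through a factory method on ORKOrderedTask. The sketch below assumes an amslerGridTask(withIdentifier:intendedUseDescription:options:) variant following ResearchKit's predefined-task naming convention, so verify the exact signature against the ResearchKit 2.0 headers you build with.

```swift
import ResearchKit

// Sketch only: creating the Amsler grid active task. The factory method name
// and parameters are assumptions based on ResearchKit's predefined-task
// convention; check them against your ResearchKit version's headers.
func makeAmslerGridTask() -> ORKOrderedTask {
    return ORKOrderedTask.amslerGridTask(
        withIdentifier: "amslerGridTask",
        intendedUseDescription: "Check for distortions in your central vision.",
        options: []
    )
}
```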

7. Environment SPL Meter

This task measures the current noise level (sound pressure level) in the participant's environment. It can be included as a step in any hearing/audio test or module and used as a gating step to ensure the participant is in a suitably quiet environment before completing the test being carried out.
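
A gating setup might look something like the sketch below, which places the SPL meter step ahead of the hearing-test steps. The property names used here (threshold value, sampling interval, required contiguous samples) reflect our reading of the ResearchKit 2.0 headers and should be treated as assumptions to verify.

```swift
import ResearchKit

// Sketch: use the environment SPL meter as a gating step before a hearing test.
// Identifiers are placeholders; the property names and units are assumptions
// about ResearchKit 2.0's ORKEnvironmentSPLMeterStep and should be verified.
func makeGatedHearingTask(hearingSteps: [ORKStep]) -> ORKOrderedTask {
    let splStep = ORKEnvironmentSPLMeterStep(identifier: "environmentSPLGate")
    splStep.title = "Quiet environment check"
    splStep.thresholdValue = 45            // maximum acceptable ambient level (assumed dBA)
    splStep.samplingInterval = 1           // seconds between samples (assumption)
    splStep.requiredContiguousSamples = 5  // consecutive passing samples required (assumption)

    return ORKOrderedTask(identifier: "gatedHearingTask", steps: [splStep] + hearingSteps)
}
```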

Apple has also released a new sample app called the Parkinson’s Research Sample App that demonstrates how to leverage the new Movement Disorder API available with the CoreMotion framework.

Given the platter of new features added to the ResearchKit framework and the support Apple is extending to the developer community that uses it, there is plenty of scope for researchers to take the research-study mobile apps they employ to the next level.

As active developers and advocates of Apple ResearchKit-based apps, Boston Technology Corporation welcomes these updates to the framework. We look forward to working with these new features and bringing them to life in apps that drive medical research for a better tomorrow!


