27 aspects in which health apps assessment frameworks could be enriched

 

Building on the extensive work done by the Hub on health apps assessment frameworks (AFs), the following are concrete areas in which the different assessment domains could be further developed.

PRIVACY

  • Scant attention paid to data sharing for the patient's benefit.

Well-defined data sharing for the patient's benefit must be possible even across borders, and AFs should require it.

  • Lack of information about personal data management.

AFs pay little attention to how personal data are managed in terms of access, retention policy and transmission methods. In addition, any analytics applied to patients' data should be disclosed and assessed.

  • Definitions of privacy and security overlap or are missing.

The privacy and security assessment domains are sometimes conflated rather than addressed separately. Concise and clear definitions of privacy and security, both in the AFs and for the user, are missing.

SAFETY

  • Scant attention paid to user input information/data.

User input has a great impact on diagnosis and monitoring, so AFs need to pay specific attention to it. The safety of users/patients/citizens also depends on the validity of their own inputs and on the ways an app or mHealth solution can validate the data collected and detect problems and deviations. Reporting and alerting health professionals in these cases is a further safety measure related to user input.

VALIDITY

  • Referring to a physician is not a common element of assessment.

Assessing whether an app or mHealth solution explicitly advises users to consult their physician is an important feature. Such a referral can alert users who would otherwise rely solely on the information the solution provides. This feature complements the necessary and common verification of the information provided by the mHealth solution.

  • Accountability for the information given by the solution is not often assessed.

Most AFs do not evaluate whether an mHealth solution is accountable for the information it provides or for its sources of information. Accountability is central to discussions of problems in public sectors such as health, and it provides security and safety for users.

TECHNICAL STABILITY

  • Little mention is made of regular application monitoring and tracking the number of app crashes and uptime.

Frequently asked questions in AFs concern the mechanism for error reporting from the user side, and documenting and tracking identified errors to ensure they are corrected. Regular app monitoring by the technical team is rarely mentioned.

A simple example of how to close this gap is asking whether "the app produces error logs or monitors actions in an external system" (e.g., Tic Salut Social).

A more detailed and improved question is this one, asked by NHS Digital: "Do you proactively monitor running of systems and system components to automatically identify faults and technical issues? Describe your monitoring processes and procedures."

There is almost no mention of tracking the number of app crashes or application uptime, which is important not only as an indicator of technical stability but also as a factor in the user experience.
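As an illustration of what such proactive monitoring could look like, the minimal sketch below tracks crash counts and uptime and emits a machine-readable summary that could be pushed to an external monitoring system. The class and field names are hypothetical and not taken from any AF.

```python
# Illustrative sketch only: the AppHealthMonitor class and its report
# format are assumptions, not requirements from any assessment framework.
import json
import time

class AppHealthMonitor:
    """Tracks crash counts and uptime so they can be reported externally."""

    def __init__(self):
        self.started_at = time.time()
        self.crash_count = 0
        self.events = []

    def record_crash(self, error: Exception) -> None:
        # Each crash is logged with a timestamp and error type so the
        # technical team can spot recurring faults, not just incidents.
        self.crash_count += 1
        self.events.append({
            "ts": time.time(),
            "type": type(error).__name__,
            "message": str(error),
        })

    def report(self) -> str:
        # Machine-readable summary suitable for an external system.
        return json.dumps({
            "uptime_seconds": round(time.time() - self.started_at, 1),
            "crash_count": self.crash_count,
            "events": self.events,
        })

monitor = AppHealthMonitor()
try:
    raise ValueError("sensor reading out of range")  # simulated fault
except ValueError as exc:
    monitor.record_crash(exc)

print(monitor.report())
```

An AF question could then ask whether such a report is produced regularly and reviewed, rather than only whether users can report errors.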

ACCESSIBILITY

  • Very limited recommendations on accessibility standards.

Most of the AFs do not explicitly recommend the Web Content Accessibility Guidelines (the WCAG 2.1 standard), which makes it difficult to improve the accessibility of mobile health for the general population.

  • Limited understanding of accessibility.

Many AFs treat accessibility as merely secondary user-interface functionality, such as font size and contrast, while other aspects, such as an in-depth analysis of the population's needs in terms of disabilities (e.g., the percentage of the population that is colour blind or has sight disabilities), are absent.
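Referencing WCAG 2.1 would also make criteria such as contrast objectively checkable, since the standard defines contrast ratio numerically. The sketch below implements that published formula; the function names are illustrative, but the arithmetic follows the WCAG 2.1 definitions of relative luminance and contrast ratio.

```python
# Sketch of the WCAG 2.1 contrast-ratio calculation; function names
# are illustrative, the formulas follow the published standard.

def _linear_channel(c: int) -> float:
    # Convert an 8-bit sRGB channel to its linear value (WCAG 2.1).
    s = c / 255.0
    return s / 12.92 if s <= 0.03928 else ((s + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb) -> float:
    r, g, b = (_linear_channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg) -> float:
    lighter = max(relative_luminance(fg), relative_luminance(bg))
    darker = min(relative_luminance(fg), relative_luminance(bg))
    return (lighter + 0.05) / (darker + 0.05)

# Black text on a white background reaches the maximum ratio of 21:1,
# well above the WCAG 2.1 AA threshold of 4.5:1 for normal-size text.
ratio = contrast_ratio((0, 0, 0), (255, 255, 255))
print(round(ratio, 1), ratio >= 4.5)  # → 21.0 True
```

An AF could ask whether the app's default colour scheme meets the 4.5:1 AA threshold, turning a vague "good contrast" criterion into a testable one.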

USER EXPERIENCE / USABILITY

  • Lack of reference to usability standards.

There is little mention of, or recommendation on, usability standards (e.g., ISO 9241-210:2019), which makes it difficult to improve the usability of mobile health applications for the general population.

  • User experience is generally absent or only mentioned as a side note.

Most AFs do not consider the user experience as a central part of the adoption and scalability of mobile health.

  • General misinterpretation of user experience and usability.

Measures of effectiveness, efficiency and satisfaction are generally not among the recommendations for evaluating mobile health applications.

  • Little participation of users in the design and evaluation of mobile health.

There is a general understanding among the AFs that users, if involved at all, should only take part in late-stage testing of the applications. Input at the design stage is seldom sought, nor is there space for users' input in decision-making.

TRANSPARENCY

  • Information gathered by the app is often not fully disclosed to the user.

Many AFs do not fully cover the questions of what information is handed over to the app, which stakeholder interests are involved, and how algorithmic app components process the available information. A clear and concise description of the collected and processed information, addressed to the user, is missing.

  • Lack of information about parties involved.

A clear statement about the stakeholders involved in an mHealth application and their roles (development, financing, etc.) is often missing.

  • Information about data processing algorithms is often not provided.

Basic but concise information about data-processing algorithms is not accessible to all stakeholders, and most AFs available today do not require it.

RELIABILITY

  • Few specific aspects about reliability are assessed.

Key questions might be used to complement the assessment of reliability, e.g.:

  • Are recurrent bugs and security bugs in the software documented?
  • Can the terms and conditions or use-based warnings be documented?

If measurements are collected, their metrological characteristics must be transparent so that their levels of precision and accuracy can be understood. Precision should be appropriate for the expected use of the product.

  • Specific testing methods for reliability are missing.

Reliability assessment, among others, mostly uses generic terms and questions. It would be fundamental also to evaluate whether specific testing is done to verify that mHealth solutions are reliable. This would provide proof of, and confidence in, the solutions.

INTEROPERABILITY

  • Interoperability is not part of the majority of AFs.

Interoperability as such is not included as a topic in most of the AFs. The AFs tend to treat interoperability as a side note and do not discuss the means or requirements to ensure it.

  • Lack of direct involvement of Standards Development Organisations (SDOs) and reference to standards.

The AFs that address interoperability do not cover it in sufficient detail and do not reference specific IT standards. Exceptions are frameworks that originate from SDOs, such as the HL7 functional framework for health apps.

  • Poor contribution to the goal of interoperability between apps.

The AFs do not ask health apps to disclose the data models and services used, which would facilitate interfaces between apps using the inter-process communication capabilities provided by the operating system. Direct communication between apps on the user's device is therefore not possible.
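By way of illustration, disclosing a data model can be as simple as publishing the resource structure an app exports. The sketch below builds a minimal heart-rate observation following the HL7 FHIR R4 Observation structure; the field names follow the published FHIR specification, while the helper function name and the example values are illustrative assumptions.

```python
# Illustrative sketch: a minimal HL7 FHIR R4 Observation resource.
# Field names follow the FHIR specification; values are examples only.
import json

def heart_rate_observation(bpm: float) -> dict:
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {
            "coding": [{
                "system": "http://loinc.org",
                "code": "8867-4",          # LOINC code for heart rate
                "display": "Heart rate",
            }]
        },
        "valueQuantity": {
            "value": bpm,
            "unit": "beats/minute",
            "system": "http://unitsofmeasure.org",
            "code": "/min",
        },
    }

print(json.dumps(heart_rate_observation(72.0), indent=2))
```

An app that documents exports in a shared format like this one gives other apps on the same device a concrete interface to consume, which is exactly what an AF could require.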

EFFECTIVENESS

  • Measuring desired or intended result (e.g., improved health outcome) is covered only in a few AFs.

AFs should capture whether the application can measure a desired or intended result in everyday use and in particular environments. Some AFs do ask questions such as whether the app sets goals for users or allows them to set their own, whether goal achievement is tracked, and whether progress is made visible, so some results are measured.

There is a gap in measuring results on the app provider's side: checking whether the data generated and recorded are accurate, whether they are relevant to the range of values expected in the target population, and whether clinically relevant changes or responses can be detected.

There is also a gap in demonstrating relevant outcomes of the application (e.g., behavioural or condition-related user outcomes such as reduction in smoking or improvement in condition management, evidence of positive behavioural change, user satisfaction) and in presenting comparative data (e.g., relevant outcomes in a control group, use of historical controls, routinely collected data) on the app provider's side.

Note: the examples of relevant outcomes and comparative data are taken from the Evidence Standards Framework for Digital Health Technologies: https://www.nice.org.uk/about/what-we-do/our-programmes/evidence-standards-framework-for-digital-health-technologies

 

  • Few questions about ethical concepts are explicitly included in the AFs.

While assessing AFs against effectiveness criteria, specifically whether a framework can capture an app's applicability by distinguishing different subgroups of users (e.g., demographics, age, gender, health literacy, medical condition, health status), it was observed that most AFs do not include questions concerning ethical concepts.

When they do exist, questions regarding ethical concepts are spread across different assessment domains.

SCALABILITY

  • Interconnection between apps and services should be evaluated in mHealth development.

There is a gap in assessing whether an mHealth solution can interconnect with several services and platforms. Where such assessment is done, the developer benefits by being able to provide its service to a wider audience, leveraging and expanding the solution for the wellbeing of its users. One resulting recommendation is to allow development of the mHealth solution to proceed independently of, and in parallel to, the interconnection mechanisms.

SECURITY

  • Few AFs cover where the information is stored.

Concerns about the jurisdiction in which data are stored can also have an impact on other domains, such as privacy. Different rules applying in different geographies should be an element of assessment. This can also affect monetisation models for mHealth solution developers, which may end up not being transparent to the user.

  • Data security and data sharing is not fully covered.

Data security is fundamental to trusted and secure use of mHealth. Across the AFs analysed, most are concerned with data security, mostly in relation to privacy, and with data sharing, mostly in relation to third-party involvement. Nevertheless, the lack of ubiquitous focus on these two security aspects can be a barrier to a universal assessment for mHealth, and this issue needs to be overcome.

  • Encryption is explicitly addressed in only a few AFs.

The main goal of encryption is to prevent unauthorised parties from reading private, confidential or sensitive data, yet only a small number of AFs address this subject explicitly. AFs commonly recommend or assess the security of data in transmission and in storage, but they rarely recommend an explicit encryption method. Furthermore, some AFs link their rules and criteria to the GDPR, which also recommends encryption for the handling of personal data but does not specify a solution. The recommendation resulting from this analysis is for an AF to assess whether a specific encryption method is in use.
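As an example of the kind of specific method an AF could ask about, the sketch below encrypts a health record at rest with AES-256-GCM, an authenticated mode that protects both confidentiality and integrity. It relies on the widely used third-party `cryptography` package; the helper names and the record format are illustrative assumptions.

```python
# Illustrative sketch: encrypting a record at rest with AES-256-GCM
# via the third-party `cryptography` package. Helper names and the
# record layout are assumptions for illustration only.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_record(key: bytes, plaintext: bytes) -> bytes:
    # A fresh 12-byte random nonce per record is prepended to the
    # ciphertext so that decryption is self-contained.
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_record(key: bytes, blob: bytes) -> bytes:
    # GCM authentication fails loudly if the blob was tampered with.
    return AESGCM(key).decrypt(blob[:12], blob[12:], None)

key = AESGCM.generate_key(bit_length=256)
blob = encrypt_record(key, b"systolic=120;diastolic=80")
print(decrypt_record(key, blob))
```

An AF question in this spirit would ask not merely "is data encrypted?" but "which algorithm, mode and key length are in use, and how are keys managed?"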