I wrote an introduction that I didn't have time to deliver, so I thought it would be appropriate to post it here. It was written in the knowledge that, whilst data protection and regulation may be taken seriously because there are repercussions for non-compliance (even if only a slap on the wrist), ethics is not yet high on the list of priorities. I tried to argue that it is very important, even if it isn't mandatory.
The consumerisation of digital health is evident in how often we use the word 'consumer' to describe the people using digital health products and resources. I believe ethics is as consumer-facing as you can get. Before today I thought about ethics in terms of what organisations do with consumer data, but with so much research potentially being done with people's data, I think research ethics is increasingly relevant.
One of the first things I ask people when I meet them is who they think is responsible for a digital health resource. Unsurprisingly, the finger is often pointed elsewhere. Even users seem to think that the app stores provide more oversight than they actually do.
The reality is that regulation and data protection won't cover everything. They rely on intended use, on language, on transparency, and on outdated technologies and terminologies. So why should a digital health company take ethical responsibility if it isn't mandatory?
First, the people who typically use healthcare resources, digital or otherwise, are exactly the kinds of people we should be protecting.
Second, consider some examples of the mistakes that can be made: it was ten days before VTech was told that young children's data had been hacked; human error led to the disclosure of nearly 800 email addresses at a London HIV clinic; and the NHS Apps Library closed in response to research papers that challenged the security and efficacy of the apps it listed.
I am not naive; I don't believe these mistakes will disappear overnight. But if we keep shifting the blame, we can never learn from them. As James Joyce suggested, mistakes are the portals of discovery. We need to be accountable.
A recent insight report from DHealth emphasised the role that branding can play in the next few years: it has the potential to increase loyalty, uptake and awareness. But building a brand also requires a realisation that there is traceable culpability.
What is traceable culpability? Whilst as an individual I may have the right to be forgotten, it is unlikely the same will be accorded to a brand. An organisation's mistakes are not easily erased from the immortal memory of the internet.
Even more than that, if you provide a digital health service you don't exactly want to undermine people's trust in the very thing you are delivering.
So how can you be accountable and build trust?
Ethical practice in research is about maintaining autonomy, providing value, being socially responsible and minimising harm. In digital health, informed consent means establishing what people can reasonably be expected to understand, which means educating them on what happens to their data, how they will need to use the resource and what risks there might be. You must ensure that what you do has value, which usually means demonstrating it through research; a lot of work is being done to find methodologies that fit more effectively with the iterative nature of technology. You need to think about your end users and how the technology will fit into their lives, and that is usually done by talking to them. Finally, it is incredibly important to think about the unintended consequences.
All of these fit quite nicely into a model with the potential to encourage uptake, by considering the different factors that might act as barriers or benefits.
But to do them you need to know your end users very well, and that can be difficult in digital health. In social science we call this context collapse: once something is online, the audience is potentially infinite. A calorie counter, for instance, might be used by people with anorexia because it allows them to input a very low weight. Iatrogenic effects like this are quite common in digital health.
There is also a need to future-proof. What happens when we can search for people using facial recognition? When we can diagnose mental illness from how someone uses their phone? When people can retrospectively access your entire brand history?
I believe that ethical working practices, coupled with honesty and transparency, can help you build trust around your brand and future-proof your services.