Power, method… and the wrong question?
In a recent Twitter conversation that grew out of a blog by Neil Crowther, Elaine James and Chris Hatton were discussing the lack of a social care research agenda. Chris suggested that any such research should look less like ‘proper’ research and that it should be able to access all areas in a way that academics would find difficult. Instantly, and without really thinking about it, I suggested that a solution might be ‘user led and controlled’, and Elaine not unreasonably asked me to expand on what I meant. (That should teach me not to intrude on other people’s Twitter conversations!) Well, this is it.

So how can we go about developing social care research that is able to “access all areas”, that doesn’t look like “proper research” and is led and controlled by service users?

The problem for most service users interested in pursuing such an idea is that the power to control research agendas invariably lies in the hands of others. Despite the growth in participatory methods, social research is understandably dominated by academics and constrained by funding streams that are usually set by government or by other powerful institutions – and these institutions rarely fund research into questions where the answers are likely to be problematic for them. So right from the start, the ability to direct any such research agenda is taken out of our hands.

Equally, there are significant methodological issues. Access to participants is usually circumscribed by the all-powerful ethics infrastructure, which rightly seeks to protect the well-being of participants. Yet in doing so, this ethics infrastructure often puts control of research beyond the reach of pretty much anybody who isn’t a professional researcher. And then of course there is the question of the extent to which the requirement to protect from harm actually ends up protecting institutions and their representatives as much as the other participants.

In short – method and power in research are closely intertwined. The relative vulnerability of many social care service users means that the very systems designed to protect them from exploitation also serve to create barriers to research. It is this conundrum that makes the conduct of social care research time consuming and expensive, and the idea of research that is “user led and controlled” deeply problematic.

But perhaps we are asking the wrong question. According to the Health Research Authority:

For the purposes of research governance, ‘research’ means the attempt to derive generalisable new knowledge by addressing clearly defined questions with systematic and rigorous methods

The guidance goes on to state that service evaluation within the NHS and Social Care should not be viewed as research and should be conducted in accordance with an organisation’s governance arrangements.

So instead of asking how we create and put in place a systematic research infrastructure for social care, maybe we should start by asking what it is that we need to know. Are we looking to develop generalisable new knowledge? Or are we looking to compensate for the lack of independent user voice in social care governance arrangements? In other words, could much of what we need to know be developed through comprehensive and rigorous evaluations of service users’ experiences rather than research? After all, and at the risk of sounding like Donald Rumsfeld, service user experiences that we haven’t collated are not “new knowledge”; they are more like “known unknowns”!

So perhaps the systemic challenge is something simpler and more foundational: properly embedding service user voice into existing social care governance. Any such governance and evaluation arrangement could be led by service users working collaboratively with providers and with the support of independent academics. Ideally this would be done within the framework of a set of national guidelines, which would provide all concerned with guidance on matters such as the publication of results and the handling of sensitive data disclosed as part of the evaluation.

As I have said, if framed in this way, the knowledge that we would develop would not be new and generalisable. It would be local knowledge that every service provider should have about the experiences of the people who use their services – and yet, like all governance arrangements, it is knowledge that could be used to develop a set of national indicators and an overview of the state of our social care infrastructure. This would provide a foundation for “proper” user-led research of the kind currently being developed as part of Disability Rights UK’s DRILL Programme.