Frequently Asked Questions (FAQs)
Section 1 - Definitions
Question 1: What is an IR-score?
An IR-score measures the distance of a value from the median (the middle value when all values are ranked). A positive IR-score indicates that the value is above the median, whereas a negative IR-score indicates that the value is below the median. A measure with an IR-score below -1.5 or above +1.5 is categorised as a possible outlier. A measure with an IR-score below -3 or above +3 is categorised as a probable outlier and may indicate the need to investigate further.
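As a rough illustration only, the sketch below assumes an IR-score of the form (value - median) / interquartile range, which is consistent with the thresholds above; see the Methodology and Interpretation page for the definition actually used in SPOT.

```python
import statistics

def ir_score(value, data):
    """Illustrative IR-score: distance from the median, measured in units
    of the interquartile range (IQR). Assumed form, not SPOT's own code."""
    q1, q2, q3 = statistics.quantiles(data, n=4)  # quartile cut points
    return (value - q2) / (q3 - q1)

def classify(score):
    """Apply the -1.5/+1.5 and -3/+3 thresholds described above."""
    if abs(score) > 3:
        return "probable outlier"
    if abs(score) > 1.5:
        return "possible outlier"
    return "not an outlier"

spend = [55, 60, 62, 64, 65, 67, 70, 72, 75, 120]  # made-up spend per head
score = ir_score(120, spend)
print(round(score, 2), classify(score))  # 4.8 probable outlier
```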
Question 2: What are the indicators in each indicator group?
See the Methodology and Interpretation page.
Section 2 - Data
Question 1: What is the source of the local authority finance data?
The finance data is revenue outturn (RO) data, i.e. actual rather than budgeted spend, from MHCLG. Guidance on local authority financial data is available here.
Question 2: Where have the local authority “Statistical neighbours (CIPFA)” come from?
The Chartered Institute of Public Finance and Accountancy (CIPFA) Nearest Neighbours model (see here for more detail) relates local authorities by their traits, using descriptive features of the area each authority administers, such as population, socioeconomic, household and mortality characteristics, rather than the services it provides. You may also find PHE's technical briefing on comparators helpful (see here for more detail).
Question 3: What time period does the data refer to?
For all of the charts under “Benchmark against other areas” the latest available data has been used, and the timing of the most recent update can be found on the Update Schedule page. You can also find more information beside each of the charts and on the Data Sources and Indicators page. The data shown on the charts may include some lag, depending on the specific data source. Ideally one would spend and then measure the impact on future outcomes; in SPOT, however, the outcomes precede the spend. This is purely pragmatic and can be justified in two ways: (a) it uses the latest data available, and (b) prior spend probably correlates well with current spend, and future outcomes probably correlate well with past outcomes.
All spend data is in financial years, e.g. 2018 refers to the financial year 2018/19. Outcomes data is in calendar, academic, or financial years, as labelled on the x-axis; the year type can be checked for each individual indicator on the Data Sources and Indicators page.
Question 4: How have you accounted for the structural changes to local authorities since April 2019?
When local authorities have been replaced by new authorities with different geographical boundaries, the new authorities are included in all charts once spend and outcomes data become available. Historic data for former local authorities is only included in the spend trends and outcomes trends charts. When a two-tier county council has been replaced by a single-tier unitary authority covering the same geographical area, all historic data for the county council is shown for the unitary authority. Former local authorities are also excluded from the group average calculations (e.g. authorities in your IMD decile) in the years in which they no longer exist.
Section 3 - Methodology
Question 1: How are SPOT's IR-scores calculated?
See the Methodology and Interpretation page.
Question 2: Why have you changed the methodology to use IR-scores instead of z-scores?
The interpretation of particular values of z as outliers relies on the assumption that the underlying data is normally distributed (Gaussian). Whilst methods exist to transform many distributions to Gaussian (thus allowing z-scores to be used), these are often convoluted and still require identifying the distribution behind each indicator, which is not straightforward.
We therefore decided to use IR-scores, as this method for identifying potential outliers is distribution-independent.
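As a toy illustration of the problem (again assuming the (value - median) / IQR form sketched above, which may not match SPOT's exact definition): in a skewed sample, a single extreme value inflates the standard deviation and can mask itself under z-score thresholds, whereas the quartiles are barely affected.

```python
import statistics

# Hypothetical right-skewed indicator values (not real SPOT data)
data = [1, 2, 2, 3, 3, 3, 4, 4, 5, 40]

mean, sd = statistics.mean(data), statistics.stdev(data)
q1, med, q3 = statistics.quantiles(data, n=4)

for x in (5, 40):
    z = (x - mean) / sd         # interpreting z thresholds assumes normality
    ir = (x - med) / (q3 - q1)  # assumed IR-score form; distribution-free
    print(f"value={x}: z={z:.2f}, IR-score={ir:.2f}")
# value=40 gets z of about 2.8 (below the usual z cut-off of 3) but an
# IR-score far above +3, so the robust method still flags it.
```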
Question 3: How have organisations been allocated to deprivation deciles?
MHCLG (Ministry of Housing, Communities and Local Government) has ranked all LSOAs (lower layer super output areas) in England on a wide range of measures of deprivation, and produced IMD (Index of Multiple Deprivation) scores in 2015 and 2019. Local authorities have been allocated equally to ten deprivation deciles, based on the IMD 2015 average score from 2013 to 2018 and on the IMD 2019 average score from 2019 onwards, labelled from “Deprivation decile 0 - most affluent” to “Deprivation decile 9 - most deprived” (see here for more detail).
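A minimal sketch of this allocation under stated assumptions (made-up scores and illustrative column names, not SPOT's actual schema; lower IMD average scores indicate less deprivation):

```python
import pandas as pd

# Hypothetical frame of local authorities and their IMD average scores
las = pd.DataFrame({
    "local_authority": [f"LA {i}" for i in range(1, 21)],
    "imd_2019_avg_score": [5.1, 8.3, 9.0, 11.2, 12.5, 14.0, 15.3, 16.8,
                           18.1, 19.4, 20.7, 22.0, 23.5, 25.1, 26.8,
                           28.4, 30.2, 32.9, 35.6, 40.3],
})

# Split authorities into ten equal-sized groups by IMD average score.
# Lower scores = less deprived, so decile 0 is the most affluent tenth.
las["deprivation_decile"] = pd.qcut(
    las["imd_2019_avg_score"], q=10, labels=range(10)
).astype(int)
```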
Question 4: How has spend per head per annum been calculated?
All spend figures are spend per head per annum. Spend per head has been calculated by dividing total spend by total resident population. No attempt has been made to use relevant sub-populations as denominators for sub-population specific spend.
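For example, with made-up figures, total spend of £3.2m and a total resident population of 160,000 would be shown as £20 per head, even where the spend relates only to a sub-group of residents:

```python
total_spend = 3_200_000  # illustrative programme spend (£)
population = 160_000     # total resident population, not a sub-group

spend_per_head = total_spend / population
print(f"£{spend_per_head:.2f} per head per annum")  # £20.00 per head per annum
```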
Question 5: How are national and group values calculated?
The values used for national (England) and group categories (CIPFA neighbours, Region, Deprivation decile) have been calculated from the measures for all organisations by simple averaging, rather than by using either published values or more sophisticated algorithms particular to each measure. For example, when selecting England as a comparator, the value is derived by taking the average of the values across all local authorities in England. This does mean that some national and group values may differ from published values; the latter should be taken as the definitive values. The pragmatic approach used here makes comparison with a wide range of peer groups possible, but it does result in some loss of precision in the values given for groups and the national value.
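A minimal sketch of this simple-averaging approach (hypothetical column names, not SPOT's actual code). Note the England value is an unweighted mean across authorities, which is one reason it can differ from a published, population-weighted figure:

```python
import pandas as pd

# Illustrative per-authority values; column names are assumptions.
df = pd.DataFrame({
    "local_authority": ["LA A", "LA B", "LA C", "LA D"],
    "region": ["North", "North", "South", "South"],
    "spend_per_head": [20.0, 30.0, 25.0, 45.0],
})

# National value: unweighted mean over all authorities (not population-
# weighted, so it may differ from officially published England figures).
england = df["spend_per_head"].mean()

# Group values: the same simple average within each peer group.
by_region = df.groupby("region")["spend_per_head"].mean()

print(england)     # 30.0
print(by_region)   # North 25.0, South 35.0
```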
Question 6: How have you chosen which outcome indicators to compare to which spend indicators?
PHE experts grouped indicators according to the programme they were most strongly related to, and then mapped spend to these outcomes after examining which spend area the indicator was most likely linked to.
Section 4 - Interpretation
Question 1: How do I navigate through the charts on this website?
A webinar explaining the redevelopment of SPOT and how to navigate this site is available to watch here.
Question 2: Is my spend causing high/low outcomes?
Correlation is not the same as causation. This can be seen by looking at the spend vs outcomes chart for local authority spend on public health vs outcomes for public health. Those authorities that spend the least seem to have the best outcomes, and those that spend the most seem to have the worst outcomes. Using a comparator group such as deprivation starts to explain the issue: deprived areas have the greatest need and the worst outcomes; affluent areas have the least need and the best outcomes.
Question 3: What is a “good outcome”?
SPOT enables users to compare spend and outcomes against other areas and across time. This may help give some indication of whether the outcomes being achieved are better or worse than in other areas, or are improving or worsening. But results should be interpreted with caution. It is not always easy even to determine which direction of change is desirable. As one example, a higher number of poor outcomes in the data may reflect worse outcomes in the population, or it could reflect better detection of outcomes in the local area. On top of this, there are many other reasons why outcomes in one area might differ from outcomes in another, even between statistical neighbours or other comparator groups, which can make it tricky to interpret what is a good or poor outcome for a local authority area.
Section 5 - Errors
Question 1: Why are there no outcome measures for my programme area?
This is possibly because we have not been able to identify any published outcome measures for this area. If you are aware of existing measures which have not been used, please contact healtheconomics@phe.gov.uk.
Question 2: Why is a key outcome measure not included in SPOT?
If you are aware of outcome measures which are not included but should be, then please feed back details, including a publicly available URL. If appropriate, further measures will be incorporated into the tool. If you do not agree with the published financial figures or their categorisation, you may wish to talk to your local finance team.
Question 3: Why is data missing?
You can find information on the most recent year available for each indicator on the Data Sources and Indicators page. If more recent data is available, it should be picked up in the next SPOT data refresh. If an indicator is no longer being updated (i.e. it has been discontinued), it will remain in SPOT until it is three years out of date, at which point it will be removed from SPOT but remain available via the Fingertips API. Data in SPOT goes back to 2013 where available.