Incentivising employers to train low-skilled workers: evidence from the UK Employer Training Pilots

Laura Abramovsky*, Erich Battistin†, Emla Fitzsimons§, Alissa Goodman§, Helen Simpson‡

January 2009
Abstract: We use unique workplace- and employee-level data to evaluate a major UK government pilot program to increase qualification-based, employer-provided training for low-qualified employees. We evaluate the program's effect using a difference-in-differences approach. Using data on eligible employers and workers, we find no evidence of a statistically significant effect on the take-up of training in the first three years of the program. Our results suggest that the program involved a high level of deadweight, and that improving the additionality of the new national program is crucial if it is to make a significant contribution towards government targets to increase qualification levels.
Keywords: Government policy; Human capital; Training; Welfare programs
JEL classification: M53; J24; I38
Acknowledgements: This paper draws on analysis carried out in a UK Department for Education and Skills Research Report http://www.dfes.gov.uk/research/data/uploadfiles/ACF8A97.pdf. We would like to thank Jeffrey Smith, two anonymous referees and seminar participants at the IFS, Birmingham, the Annual Congress of the European Economic Association 2006 and the Conference on the Analysis of Firms and Employees 2006 for helpful comments. The authors would also like to thank the ESRC Centre for the Microeconomic Analysis of Public Policy at the IFS for financial support for this paper, and the UK Department for Education and Skills for funding the evaluation. All errors are the responsibility of the authors. Correspondence:
[email protected];
[email protected];
[email protected];
[email protected]; and
[email protected].
* Institute for Fiscal Studies, London and University College London
† University of Padova and IRVAPP
§ Institute for Fiscal Studies
‡ Centre for Market and Public Organisation, University of Bristol and Institute for Fiscal Studies
1 Introduction
A key aim of many governments is to raise national skill levels in order to improve long-run productivity performance. This paper examines the effectiveness of one policy approach, now implemented in the UK, which specifically targets training by low-qualified individuals in employment. The setting is of particular interest, since as many as 1 in 3 working-age adults (aged 19-64), and 1 in 3.5 employees, in Great Britain lack skills equivalent to the basic school-leaving qualification. Compared to other OECD countries such as Sweden, Finland, the USA and Germany, the UK has a significantly larger proportion of adults with low qualifications and a smaller proportion holding intermediate-level qualifications (see HM Treasury, 2005). Although considerable opportunities exist in the UK for the low-skilled long-term unemployed to receive government-subsidised qualification-based training, the opportunities for those who are in work have generally been more limited.

Market failures may prevent efficient levels of workplace training from taking place and may justify intervention. In general, theory suggests that where individuals are credit constrained there may be under-provision of training in transferable skills, which basic qualifications generally impart (see, for example, Becker, 1962; and Katz and Ziderman, 1990). There may also be informational failures, imperfect competition in markets for skills, or positive externalities associated with training, which lead to under-provision (see, for instance, Acemoglu and Pischke, 1999; Chang and Wang, 1996; and Stevens, 1994). Although little evidence exists on the magnitude of these potential market failures, it is certainly the case in the UK that the likelihood of an employee receiving formal training at work is positively related to their existing qualification level, as shown in Figure 1.

In response to such concerns the UK government launched "Train to Gain" in April 2006. Train to Gain offers free training, either to a basic skills qualification or to a National Vocational Qualification (NVQ Level 2 or 3), to employees who do not possess the basic school-leaving level of qualification (classified as a Level 2 qualification), or who lack basic literacy, numeracy or language skills.1
1 Level 2 is the basic expected school-leaving qualification at age 16. UK Level 1 is a formal qualification below the basic school-leaving standard. Level 3 is a more advanced school-leaving qualification, generally taken at 18. Level 4 is a university degree or other higher education qualification. Steedman et al. (2004) provide a guide to international equivalents in France, Germany, Singapore and the US. For the US, they classify those with less than high-school graduation, together with the lower-attaining 50% of those with high-school graduation only, to below Level 2; the other 50% of those with high-school graduation only are classified to Level 2. Level 3 corresponds to some college education but no degree. Qualifications above Level 3 correspond to a bachelor's or higher degree, a professional school degree, or an associate degree (academic or vocational).
The large majority of training delivered through Train to Gain is through NVQ qualifications. These are occupation-based qualifications, offering on-the-job training in work-related tasks designed to help individuals do their jobs more effectively (more details are provided below). In addition to free training, employees taking part in Train to Gain receive a number of hours of paid time off for training during working hours, and employers with fewer than 50 employees receive wage compensation for these hours off. The package also includes a free independent brokerage service to help employers identify their training needs and source appropriate training provision. Train to Gain is a financially substantial scheme, projected to cost around £1 billion (0.06% of GDP) in 2010-11. This represents over a third of the government's adult skills and further education budget.
Figure 1: Employee training (in last 3 months and 4 weeks), by level of highest qualification
[Figure: proportion of employees reporting receiving training in the last 3 months and in the last 4 weeks, by highest qualification level (Level 4+, Level 3, Level 2, Level 1, Other, Trade, None); vertical axis from 0 to 0.4.]
Source: Labour Force Survey, Spring 2005. Employees in Great Britain, aged 19-64. Note: see footnote 1 for definitions of Level 1 to Level 4 qualifications. Trade qualifications are "recognised trade apprenticeships", while "other" refers to qualifications that either cannot be classified or whose classification is not known.
Before implementing Train to Gain the government piloted elements of the program in selected areas of England in the form of the Employer Training Pilots (ETP). The pilots ran
between 2002 and 2006 and gave financial incentives to employers to provide qualification-based training to their low-qualified employees. The precise design of the ETP differed from the subsequent Train to Gain program; in particular, it offered basic skills and NVQ Level 2 training but not NVQ Level 3. There were also important differences with respect to wage compensation for larger employers; full details are given in section 2. The government commissioned an extensive evaluation of the ETP, which included testing the effects of the pilots on the take-up of training amongst employers and employees. The design of the evaluation involved collecting unique surveys of employees and employers, in pilot and non-pilot (control) areas, before and after the introduction of the pilots (see Abramovsky et al., 2005). This paper focuses mainly on the early impact of the ETP on the provision of training to low-qualified employees by their employers. It also provides evidence on the characteristics of employers who provide such training, and of the low-qualified employees who receive it.

Although there is an extensive literature assessing the effectiveness of policy interventions to encourage training, this paper marks something of a departure from it. First, the ETP is an intervention aimed at all low-qualified employees, rather than at the unemployed or at a specific group such as young people. To this extent it differs from other training interventions involving the use of subsidies, which have often been open to the unemployed and have been coupled with employment initiatives (see Heckman, 1998, for a summary of evidence on the effectiveness of more general training and employment programs in the US, and Heckman et al., 1999, for evidence from programs operating in a range of countries). There is relatively less evidence on the effectiveness of comparable interventions targeted at employees.2 Second, what evidence there is on policies aimed at employees has tended to focus on the longer-term effects on participants, e.g. their wage and employment trajectories (see, for example, Krueger and Rouse, 1998, who investigate the effects of subsidised workplace education programs at two companies in the US). By contrast, we are primarily interested in estimating the deadweight associated with the policy, by considering the early effects of the intervention and whether or not the subsidies embodied in the ETP increase overall levels of training in the areas where it has been implemented.
2 HM Treasury (2002) gives some details of international approaches to workforce training more generally, including the use of training levies in France and Australia and vocational training programs for young people in Germany.
Perhaps closest to our research is Leuven and Oosterbeek (2004), who examine a change to Dutch tax law in 1998 that entitled employers to a tax deduction when they trained employees aged 40 and over. The authors consider the effect of the age-related tax deduction on both training and wages. Whilst they find that the training participation rate of workers aged just above 40 is around 15-20 percent higher than that of workers aged just below 40 (the control group), they go on to show that this difference is almost entirely driven by a reduction in training for those aged just below 40, rather than an increase in training amongst the eligible.

We evaluate the effect of the ETP on the take-up of training using a difference-in-differences approach. We compare changes in the receipt and provision of training across low-qualified employees and workplaces employing such individuals, in pilot areas and a set of comparable control areas, from before to after the implementation of the program. We estimate that on average around 7 percent of eligible employers would have provided qualification-based training to their eligible employees in the absence of the pilot, and we find no evidence of a statistically significant impact of the pilot on this proportion, or on the fraction of eligible employees undertaking qualification-based training, in the early years of the pilot. The evidence therefore suggests that, unless the level of 'deadweight' improves substantially from its initial levels, the likely impact of an equivalent national program on UK skills and productivity will be at best very modest.

The paper is structured as follows. Section 2 sets out the main features of the ETP. Section 3 describes the data collected for the ETP evaluation and the evaluation methodology. Section 4 discusses our main results and a series of robustness checks, and section 5 concludes.
2 The Employer Training Pilots
The ETP were established in September 2002 in six Local Learning and Skills Council (LLSC) areas in England (each covering workforces ranging from just under 300,000 to just over 1 million individuals).3 After their initial introduction, the pilots were extended in
3 At the time of operation of the ETP, LLSCs were responsible for the planning and funding of post-16 vocational education and training. There were 43 in England. They have subsequently been replaced by larger regional Learning and Skills Council bodies.
both length and coverage. Six new LLSC areas were introduced to the ETP in September 2003, and a further eight in September 2004. By then the ETP covered around one third of the English workforce.4 All three 'waves' ran until April 2006. The evaluation presented in this paper covers 'first wave' areas, those involved in the pilot from September 2002, and 'second wave' areas, those involved from September 2003.

The primary aim of the pilots was to increase the level of training provided by employers and received by employees who lacked the basic school-leaving qualification (categorised as Level 2), and who would not otherwise engage in qualification-based training.5 The policy implemented in the pilot areas combined four elements:
• Free or subsidised training to a basic skills or NVQ Level 2 qualification;
• Paid time off for training (funded for either 35 or 70 hours);
• Wage compensation (paid to employers for a maximum of 35 or 70 hours of time off);
• Information, advice and guidance to employers and employees.
Participation in the ETP program was voluntary, and the decision to participate was made by the employer. Employees eligible for the program were those who did not possess a first Level 2 qualification of any kind (whether obtained through an academic or a vocational route), while eligible employers were those employing eligible employees.

The large majority of training provided through the ETP was via NVQ Level 2 qualifications. These are all work-related, competence-based qualifications. Individuals are assessed on their competence in practical assignments that take place in the workplace or in a realistic working environment, together with a portfolio of evidence. At Level 2 the candidate is expected to demonstrate the ability to perform a variety of tasks with some guidance or supervision; some of the activities are complex or non-routine, and there is some individual responsibility and autonomy. An indication of the time commitment required of trainees is given in Hillage et al. (2006), who report that it took on average 110 hours for a trainee to complete an NVQ2 course under the ETP. Of this, around one half was spent in contact with the training provider during working hours, and a further quarter on independent learning and gathering evidence for assessment. The final quarter was spent outside working hours and was usually unpaid.
4 Adult education policy in Scotland, Wales and Northern Ireland is covered by different arrangements.
5 Other objectives included tackling barriers to the provision of training to qualifications for low-skilled employees, and encouraging more flexible and responsive provision of training to meet employers' needs. See HM Treasury (2002).
The exact details of the policy varied across areas and with the size of employers (small, medium and large), with the main differences being in the levels of wage compensation and in the number of hours of time off for which compensation was payable. Table 1 sets out the different policy variants implemented in each of the first and second wave ETP areas, which are the focus of this paper. Pilot areas offered different combinations of wage compensation, paid as a percentage of a nominal basic rate (between 110 and 150 percent, 75 and 120 percent, and up to 75 percent for small, medium and large workplaces, respectively), and hours of time off (multiplied by the wage compensation hourly rate to give the maximum amount an employer could receive). This design was chosen by the government in an attempt to allow the effects of different policy elements to be tested within the framework of the evaluation.
Table 1: First and second wave ETP areas, and policy variants

LLSC area | Wage compensation (% of hourly pay): Small (under 50 employees) / Medium (50 to 249) / Large (250 or more) | Time off (hours)

1st wave (started September 2002)
Greater Manchester      | 150% / 120% / 75% | 35
Derbyshire              | 130% / 100% / 50% | 35
Essex                   | 110% / 75% / 0%   | 35
Tyne and Wear           | 150% / 120% / 75% | 70
Wiltshire and Swindon   | 130% / 100% / 50% | 70
Birmingham and Solihull | 110% / 75% / 0%   | 70

2nd wave (started September 2003)
Shropshire              | 150% / 120% / 75% | 35
Leicestershire          | 130% / 100% / 50% | 35
Kent and Medway         | 0% / 0% / 0%      | 35
East London             | 150% / 120% / 75% | 70
Berkshire               | 130% / 100% / 50% | 70
South Yorkshire         | 110% / 75% / 0%   | 70
In order to provide further information about the nature and size of the treatment package, Table 2 translates these different variants into maximum per-trainee payments, assuming a 'basic' pay of £5.00 per hour (the National Minimum Wage was £4.50 per hour from October 2003). For those pilots providing wage compensation, maximum payments varied from £87.50 to £525 per worker. In one pilot area no wage compensation was offered for the time off, for any workplace size. The largest element of subsidy, however, was the free tuition. This was paid direct to the training provider, and ranged from £500 for a basic "Skills for Life" qualification to £1,200 for an NVQ2 in construction. Payments to training providers were staged (20 percent on registration of trainees, 30 percent halfway through the course, and the final 50 percent on completion of the course).
Training providers often played a role in recruiting employers into the ETP program.

The right-hand panel of Table 2 gives some indication of the size of the program by showing year one take-up rates of the ETP among the eligible employer and employee populations in each pilot area (defined as the number of participants as a percentage of the estimated eligible population). Take-up varied markedly across pilot areas, but was generally considerably larger in the second wave pilot areas than in the first (ranging between 1.7% and 7.2% of eligible employers in the first wave pilot areas, and between 2% and 17.9% of eligible employers in the second wave). There is a strong negative unconditional correlation between take-up rates and area size. There is also some indication of a negative unconditional correlation between employer take-up rates and the generosity of compensation, and a very weak positive unconditional correlation between employee take-up rates and the generosity of compensation.

While the ETP program represented a new approach to encouraging employers to provide job-related training to the low skilled, a small minority of employers were already providing this type of training in the absence of the program. Our Employer Survey suggests that around 7 per cent of eligible employers were providing 'ETP-type' training to their low-qualified employees in the absence of the program. Further information about the characteristics of these employers is provided in the descriptive statistics in section 4.1 and in Table 5 below.
Table 2: Maximum compensation and ETP take-up rates

LLSC area | Maximum per-trainee compensation: Small (under 50 employees) / Medium (50 to 249) / Large (250 or more) | Estimated eligible employers | Year 1 employer take-up rate | Estimated eligible employees | Year 1 employee take-up rate

1st wave
Greater Manchester (c)         | £262.50 / £210.00 / £131.25 | 18,687 | 2.9%  | 322,601 | 1.3%
Derbyshire (a, c)              | £227.50 / £175.00 / £87.50  | 7,316  | 2.7%  | 140,900 | 1.2%
Essex (a, c)                   | £192.50 / £131.25 / £0.00   | 14,387 | 5.3%  | 233,310 | 1.8%
Tyne and Wear (c)              | £525.00 / £420.00 / £262.50 | 6,601  | 3.9%  | 127,194 | 2.1%
Wiltshire and Swindon (a, c)   | £455.00 / £350.00 / £175.00 | 5,289  | 7.2%  | 86,485  | 3.2%
Birmingham and Solihull (a, c) | £385.00 / £262.50 / £0.00   | 8,812  | 1.7%  | 154,256 | 1.0%

2nd wave
Shropshire (c)                 | £262.50 / £210.00 / £131.25 | 3,549  | 17.9% | 58,041  | 9.2%
Leicestershire (a, b, c)       | £227.50 / £175.00 / £87.50  | 7,281  | 8.7%  | 131,486 | 3.9%
Kent and Medway (a, c)         | £0.00 / £0.00 / £0.00       | 9,742  | 9.7%  | 229,024 | 2.0%
East London (a)                | £525.00 / £420.00 / £262.50 | 17,129 | 1.9%  | 238,275 | 2.3%
Berkshire (a, b, c)            | £455.00 / £350.00 / £175.00 | 6,895  | 9.9%  | 96,078  | 4.3%
South Yorkshire (c)            | £385.00 / £262.50 / £0.00   | 7,665  | 10.0% | 187,802 | 4.2%

Note: compensation figures in this table are based on an assumed hourly wage of £5.00; e.g. for small firms in Tyne and Wear the per-trainee payment is £7.50 per hour (150% of £5.00) for up to a maximum of 70 hours. a: included in the analysis using the Employer Survey; b: Employee Survey; c: LFS. Source: participant data from Hillage et al. (2006). Eligible employer populations estimated using the UK Inter-Departmental Business Register and the Employer Survey. Eligible employee populations estimated using NOMIS Local Labour Force Survey data.
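To make the arithmetic behind Table 2 concrete, the short sketch below reproduces the maximum per-trainee compensation calculation (compensation rate × assumed £5.00 basic pay × capped hours off) and a year one take-up rate. The function names and the participant count in the second example are our own illustrative choices, not figures from the evaluation; the compensation parameters and eligible population are taken from Tables 1 and 2.

```python
# Illustrative sketch of the Table 2 arithmetic (not code from the evaluation itself).
# Maximum per-trainee compensation = compensation rate x assumed basic pay x capped hours off.

ASSUMED_HOURLY_PAY = 5.00  # 'basic' pay assumed in Table 2 (the National Minimum Wage was 4.50 from October 2003)

def max_compensation(rate: float, hours_off: int, hourly_pay: float = ASSUMED_HOURLY_PAY) -> float:
    """Maximum per-trainee wage compensation payable to an employer."""
    return rate * hourly_pay * hours_off

# Example from Table 1: Tyne and Wear, small employer, 150% compensation, 70 hours off.
print(max_compensation(1.50, 70))  # 525.0, matching the 525.00 entry in Table 2

def take_up_rate(participants: int, eligible_population: int) -> float:
    """Year one take-up rate: participants as a share of the estimated eligible population."""
    return participants / eligible_population

# The participant count below is hypothetical, chosen purely to illustrate the calculation;
# the eligible employer figure is the Wiltshire and Swindon entry from Table 2.
print(round(take_up_rate(381, 5_289), 3))  # about 0.072, i.e. the 7.2% shown in Table 2
```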
3 Data and evaluation methodology
In this section we outline our data on employers and employees, and our methodology for evaluating the effect of the program on both the provision of training by employers and the take-up of training by employees.

3.1 Data
Employer and Employee Surveys

Two specially commissioned surveys were collected in summer 2003 (first round) and summer 2004 (second round) to evaluate the effect of the ETP on the take-up of training. Interviews were carried out with random samples of employers and employees to obtain information on training activity. The Employer Surveys were carried out in four 'first wave' pilot areas and four 'second wave' pilot areas, indicated by a in Table 2, as well as in two selected control areas, Bedfordshire and Sussex. The Employee Surveys took place in two second wave pilot areas, indicated by b in Table 2, and in the same two control areas. To ensure comparability of results across areas, and to ensure that the characteristics of pilot and control areas were aligned, the areas surveyed were chosen to be similar in terms of their industrial structure (industrial composition and workplace size distribution) and labour market indicators (the unemployment rate and the proportion of individuals receiving training within the last 13 weeks) in the five years prior to the implementation of the pilot.6 Since the number of survey areas was restricted by the overall evaluation budget, the pilot areas surveyed were also chosen so as to maximise sample sizes for particular policy variants.

A summary of where and when the surveys were conducted is presented in Table 3. The ETP began in the first wave pilot areas in September 2002 and in the second wave pilot areas in September 2003. Each survey round of the Employer Surveys collected information on training activity in the year prior to the interview, in first wave pilot areas, second wave pilot areas and the control areas. The first survey was carried out in summer 2003, covering the period September 2002 – August 2003, and thus provides pre-program information only for employers in second wave pilot areas.
6 The same design has been used in the evaluation of other important programs in the UK. See, for example, Battistin et al. (2005) and Blundell et al. (2004). Similarities across areas were established with respect to area-level indicators defined from a principal component analysis of the set of area characteristics.
The second survey was carried out in summer 2004, covering the period September 2003 – August 2004. To retrieve pre-program information for the first wave pilot areas, retrospective questions relating to the period September 2001 – August 2002 were added to the first (summer 2003) survey. The same strategy was pursued in the second survey, providing retrospective information for the period September 2002 – August 2003.7 The information from the Employer Surveys therefore consists of both contemporaneous (C) and retrospective (R) data, as shown in the first panel of Table 3.

Two Employee Surveys were carried out, in summer 2003 and summer 2004, but only in second wave pilot areas and in the control areas, and only contemporaneous information on training was collected.
Table 3: Summary of survey information. C: contemporaneous data; R: retrospective data

                                           | Summer 2002 | Summer 2003 | Summer 2004
Employer Surveys
1st wave pilots (started September 2002)   | R           | R, C        | C
2nd wave pilots (started September 2003)   | R           | R, C        | C
Control areas                              | R           | R, C        | C
Employee Surveys
2nd wave pilots (started September 2003)   | -           | C           | C
Control areas                              | -           | C           | C
LFS
1st wave pilots (started September 2002)   | C           | C           | C
2nd wave pilots (started September 2003)   | C           | C           | C
Control areas                              | C           | C           | C
The Employer and Employee Surveys were independent of each other, in that the sample of employees was not specifically chosen from the workplaces that were surveyed. The Employer Surveys interviewed the same employers in both survey rounds, while the Employee Surveys carried out interviews of random samples of individuals in both years.
7 The survey design provides contemporaneous and retrospective information on training activity for the period September 2002 – August 2003, which we use to investigate the quality of the retrospective data, as discussed below.
Therefore the information available for the analysis consists of repeated cross-sections of employees from second wave pilot and control areas before and after the implementation of the program, and longitudinal information on employers in pilot and control areas which, only for the second wave pilots, was collected both before and after the launch of the ETP. More detail on the sampling design and survey methodology can be found in Abramovsky et al. (2005).

The 2003 Employer Survey had a response rate of 41%.8 The 2004 Employer Survey successfully re-interviewed around 67% of the 23,000 eligible employers interviewed in 2003. In our analysis we use sample weights stratified by LLSC area, industry and employer size-band to control for attrition and to ensure that results are representative of the underlying eligible population. We assume that survey non-response is random with respect to training provision within these cells. We also investigated the possible effects of selective non-response to the second survey on the employer analysis. We found that the probability of a workplace not being successfully re-interviewed in the second survey (i.e. exiting the sample) does not depend on whether it provides training, on whether it is in a first or second wave pilot area relative to a control area, or on the interaction between training provision and being in a pilot area, conditional on workplace characteristics (see Table A1 in the Appendix).9

The 2003 Employee Survey involved face-to-face interviews with around 5,500 eligible individuals. The overall response rate was around 55%.10
8 This response rate is a lower bound, calculated as the number of completed interviews with eligible employers expressed as a percentage of the total of completed interviews with eligible employers plus refusals to be interviewed. Unfortunately we do not know at what stage the refusal occurred, or the fraction of refusals that were by eligible employers. The denominator will therefore include some ineligible employers who refused to respond before the screening stage (used to identify eligible employers). Assuming that the ratio of eligibles to ineligibles among the refusals is the same as the ratio of completed interviews to screened-out ineligibles results in an upper bound response rate of 67%.
9 We do find that large workplaces are more likely to exit the sample, and that workplaces operating in the finance and business services and the education and public sectors are more likely to remain in the sample. However, we include these workplace characteristics in our analysis and construct population weights on the basis of workplace-size and industrial-sector cells. As a further check that attrition is not a source of bias, we estimated additional specifications for employers, both considering all observations in the first survey and restricting the sample to employers who were successfully re-interviewed in the second survey. We find that the results using the two samples are not statistically different from each other.
10 This response rate is the number of completed interviews with eligible individuals expressed as a percentage of the estimated number of eligible individuals initially contacted. The 55% is the product of a 73% response rate to the screening phase, which was used to identify eligible employees (i.e. those qualified to below Level 2), and 76% achieved interviews out of the total eligible contacted, and hence assumes that agreement to participate in the screening phase was unrelated to eligibility status.
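As an illustration of the response-rate bounds described in footnote 8, the sketch below computes the lower and upper bounds under the stated assumptions. The contact counts are invented round numbers, chosen only so that the bounds come out near the reported 41% and 67%; they are not the survey's actual counts.

```python
# Illustration of the response-rate bounds in footnote 8, using hypothetical contact counts.

completed_eligible = 1_000   # completed interviews with eligible employers (hypothetical)
screened_ineligible = 1_900  # contacts screened out as ineligible (hypothetical)
refusals = 1_450             # refusals of unknown eligibility status (hypothetical)

# Lower bound: treat every refusal as an eligible employer who did not respond.
lower_bound = completed_eligible / (completed_eligible + refusals)

# Upper bound: assume the eligible/ineligible mix among refusals equals the mix among
# screened contacts, so estimated eligible refusals = refusals * E / (E + I).
eligible_refusals = refusals * completed_eligible / (completed_eligible + screened_ineligible)
upper_bound = completed_eligible / (completed_eligible + eligible_refusals)

print(f"lower bound: {lower_bound:.0%}, upper bound: {upper_bound:.0%}")  # roughly 41% and 67%
```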
The 2004 Employee Survey was carried out by telephone, and around 8,000 completed interviews were obtained, with an approximate response rate of 45%.

As discussed above, the Employer Surveys included retrospective questions to allow us to obtain information on training activity in first wave pilot areas before the ETP was implemented. To check the quality of this recall information we compared data from two self-reported questions on training provision for the period September 2002 – August 2003: one contemporaneous, taken from the first Employer Survey, and one retrospective, taken from the second Employer Survey (as described in Table 3). We looked at the extent to which the answers to the retrospective questions accurately reflected the answers to the contemporaneous questions across the different areas. Retrospective information yields a lower proportion of workplaces that provide training: the proportion of workplaces reporting that they provide training is 8.85% using the contemporaneous data and 4.34% using the retrospective data. The simple correlations between the contemporaneous and retrospective data for employers are 15% and 20% for our two measures of employer eligibility (qualification-based and occupation-based, discussed below), and these correlations are the same across pilot and control areas. That is, the quality of retrospective answers is very similar across pilot and control areas, and the accuracy of reporting training provision through the retrospective questions does not depend on the ETP being in operation. Despite the relatively low correlation between the two survey measurements, we find that the results for second wave pilots are robust to measuring pre-program outcomes from either contemporaneous or retrospective information. We take this as evidence that the effect of recall errors on the estimation of the effect of the policy, and on the subsequent policy implications, is likely to be negligible.

A key part of the survey process was to accurately identify eligible employers and employees through initial screening questions. For the Employer Surveys, eligible employers were identified as those either employing individuals without basic skills or Level 2 qualifications, or employing individuals in occupations associated with low qualifications, at the time of the first interview. For some employers, eligibility status depended on whether low-skilled employees were identified through the qualification-based or the occupation-based measure. Though combining these two sources of information may limit the effects of misclassification (see, for example, the discussion in Battistin and Sianesi, 2006), here we define eligibility using the qualification-based
measure, which is the definition closest to the ETP policy, and use the occupation-based measure as a robustness check for our results. For the Employee Surveys, eligible employees were identified as those with less than a Level 2 qualification, using self-reported information on their educational attainment.

To measure training provision, detailed information was collected in each survey to gradually narrow down the definition of training to the type provided under the ETP. Training provision at the workplace level was measured by asking employers whether, during the last year, their eligible employees had any off-the-job training which was funded, arranged or supported by the employer, and whether that training led to a basic skills or a Level 2 qualification (specifically an NVQ Level 2).11 The measures of ETP-type training that we consider in the Employee Surveys relate to training in the last three months (to ensure comparability with data from the Labour Force Survey; see below), including whether any training allowed time off from normal duties, was externally provided or employer-supported, and whether any training would lead to, or had led to, a qualification and, if so, the type of qualification.

The Employer Surveys also collected information on a range of workplace characteristics, including size, industrial sector and whether the workplace is part of a larger company. The Employee Surveys collected information on key demographic characteristics and work-related information for eligible individuals, such as age, marital status, age at which they left full-time education, and occupation. We also map local area characteristics, sourced from the UK Office for National Statistics (ONS), into the data to control for further characteristics of the areas in which employers and employees are located. Descriptive statistics on employer, employee and local area characteristics are provided in Tables A2 and A3 in the Appendix.

Labour Force Survey

We supplement the Employee Survey with data from the Labour Force Survey (LFS). The LFS is a quarterly survey of around 60,000 households living at private addresses in Great Britain. It is a five-wave rolling panel, with each household interviewed for five successive quarters. We were granted a special release of the LFS containing specific
11 We use training to NVQ Level 2 as the definition of training to a Level 2 qualification. However, our main results are very similar if we categorise employers that did not specify the type of qualification to which their eligible employees were training as providing ETP-type training, or if we drop them from the analysis altogether. The Employer Survey training questions are shown in Table A7 in the Appendix.
geographic identifiers allowing identification of the ETP pilot areas.12 As the ETP Employee Survey only contains data for two pilot areas (as well as the two controls), these additional data allow us to examine the impact of the ETP across a much wider range of pilot areas, and to select control areas more flexibly. Moreover, given its ongoing nature, the LFS allows us to examine the effect of the ETP beyond the first year of operation.

We use LFS data from the spring quarters of 2002 through 2004, because spring is the only quarter containing information on the size of the employee's workplace and on whether their training leads to a qualification.13 In each spring we have a sample of approximately 10,000 eligible individuals (i.e. employed individuals with less than a Level 2 qualification) in England, of whom around 1,500 live in first wave pilot areas and around 1,000 in second wave pilot areas.14 The sample information available from the LFS is summarised in the last panel of Table 3. In using the LFS, we also add local area variables (at the LLSC area level) to the data and control for them in the analysis. The LFS variables are listed in Table A4 in the Appendix.

We choose control areas from the LFS data by first selecting, for each pilot area, non-pilot areas that have the most similar pre-program trends in job-related training. We do this because, as discussed below, our difference-in-differences evaluation methodology relies on the assumption that the pilot and control areas share common trends in training over time. The second criterion we use in choosing control areas is that they are geographically close to each pilot area, again to minimise any unobserved area differences between pilots and controls that would confound the estimated effects. We list the control areas alongside each pilot area in Table A5 in the Appendix. As a robustness check we also use all non-pilot areas in England as control areas.

In all three datasets we define measures of the provision or take-up of training as binary (one-zero) variables.
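The following is a stylised sketch of the first control-area selection criterion described above: for each pilot area, rank candidate non-pilot LLSC areas by how closely their pre-program trend in job-related training matches the pilot area's. The data layout, function name and distance metric are our own illustrative choices rather than the evaluation's actual procedure, which also required geographic proximity to the pilot area.

```python
# Stylised sketch: rank candidate control areas by similarity of pre-program training trends.
import numpy as np
import pandas as pd

def closest_controls(pre_trends: pd.DataFrame, pilot: str, candidates: list[str], k: int = 3) -> list[str]:
    """pre_trends: rows are LLSC areas, columns are pre-program years, values are training rates.
    Returns the k candidate areas whose year-on-year changes in training are closest
    (in Euclidean distance) to those of the pilot area."""
    pilot_trend = pre_trends.loc[pilot].diff().dropna().to_numpy()
    distances = {
        area: np.linalg.norm(pre_trends.loc[area].diff().dropna().to_numpy() - pilot_trend)
        for area in candidates
    }
    return sorted(distances, key=distances.get)[:k]

# Example with made-up pre-program training rates for one pilot and three candidate areas:
rates = pd.DataFrame(
    {2000: [0.10, 0.11, 0.08, 0.10], 2001: [0.11, 0.12, 0.08, 0.12], 2002: [0.12, 0.13, 0.09, 0.13]},
    index=["PilotA", "AreaX", "AreaY", "AreaZ"],
)
print(closest_controls(rates, "PilotA", ["AreaX", "AreaY", "AreaZ"], k=2))
```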
12 This information is not available in the data that are in the public domain.
13 The definitions of training are broader in the LFS than in the Employee Survey (Tables A8 and A9 in the Appendix contain the training questions from both surveys). The latter allows us to define a training variable closer to the actual training subsidised by the ETP.
14 ETP eligibility is based on workplace rather than home address, so we need to assume that region of residence is the same as region of work. We believe this to be reasonable for all pilot areas except one in East London, which we therefore exclude from the analysis.
3.2 Evaluation methodology
Our identification strategy follows a difference-in-differences approach, which has been widely used for the evaluation of public interventions. This method compares the change in the take-up of training (whether by employers or employees) in pilot areas from before to after the program to the change in the take-up of training in control areas, conditional on observable characteristics. The estimator relies on the assumption that, conditional on observable characteristics, the evolution of the provision of training in pilot and control areas would have been the same in the absence of the ETP, i.e. that any difference in training activity between pilot and control areas due to unobserved factors is fixed over time (see Heckman et al., 1997, and Abadie, 2005). Throughout our empirical analysis, we thus assume that the difference in outcome growth over time between pilot and control areas, conditional on pre-program observable characteristics, identifies the average effect of the ETP program in pilot areas.

Because the structure of data collection for employers and employees differed, we take a slightly different approach to estimating the impact of the ETP for these two groups. Our model for the provision of training uses panel data on employers, which allows us to control for workplace-specific, time-invariant unobserved characteristics. To do so we model the provision of training using a linear regression model,15 and consider as the outcome variable the growth in training from before to after the program. We estimate

Δy_it = β_0 + β_1 P_i + δ x_i + ε_it,    (1)
where Δy_it is the change in the training provision of employer i between the pre- and post-program periods (taking values -1, 0 and 1), P_i equals one for pilot areas and zero for control areas, and x_i denotes observed characteristics that are pre-determined with respect to the launch of the ETP. Under the assumptions stated above, β_1 yields the effect of the program.

For the employee analysis, we use repeated cross-section data to estimate the effect of the program on the probability that an employee is undertaking training, by estimating the following regression (see Heckman and Robb, 1985):

y_it = α_0 + α_1 P_i + α_2 1[t = 2] + α_3 P_i · 1[t = 2] + γ x_i + υ_it,    (2)
15 We estimate a parametric model due to the gain in efficiency for inferring the causal effects of the policy. However, as discussed in section 4.3, our results are robust to using semi-parametric kernel-based matching.
where y_it takes the value one if employee i is receiving training and zero otherwise, 1[·] is an indicator function that equals one if the condition in brackets holds and zero otherwise, and all other variables are as previously defined. The parameter of interest is α_3, the effect of the ETP on the take-up of training.
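As an illustration of how these specifications can be taken to data, the sketch below estimates the equation (2) specification by least squares on toy data, with standard errors clustered at an area level, and reads off the coefficient on the pilot × post interaction, the analogue of α_3. The column names and simulated data are illustrative only, not the evaluation datasets; equation (1) would be estimated analogously by regressing the change in employer training provision on the pilot dummy and pre-determined controls.

```python
# Minimal sketch of the difference-in-differences regression (2) on toy data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 4_000
employees = pd.DataFrame({
    "pilot": rng.integers(0, 2, n),            # P_i: 1 if the area is an ETP pilot area
    "post": rng.integers(0, 2, n),             # 1[t = 2]: post-program survey round
    "local_authority": rng.integers(0, 40, n), # cluster identifier for the standard errors
    "firm_medium": rng.integers(0, 2, n),      # stand-in for a pre-determined control (x_i)
})
employees["training"] = (rng.random(n) < 0.08).astype(int)  # y_it: 0/1 training outcome

eq2 = smf.ols("training ~ pilot * post + firm_medium", data=employees).fit(
    cov_type="cluster", cov_kwds={"groups": employees["local_authority"]}
)
print(eq2.params["pilot:post"], eq2.bse["pilot:post"])  # estimate of alpha_3 and its standard error
```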
It is worth noting that the inferential conclusions on the significance of the estimated causal effects of the ETP may be affected by the presence of serially correlated area-level shocks, as highlighted by Bertrand et al. (2004). Serial correlation in the propensity to provide (or receive) training could be a relevant issue in our application because of persistent area-level shocks, reflecting, for example, information flows between nearby employers or local initiatives by training providers.16 We have given this careful attention by estimating robust standard errors that are clustered at the level of the local authority. The number of local authorities within the surveyed LLSC areas varies between 2 and 14, with 8.5 on average.17

We test the validity of the common trends assumption underlying our difference-in-differences estimates by comparing the outcome growth in pilot areas to that in control areas prior to the introduction of the program, as follows. Using the Employer Survey data, we estimate equation (1) comparing the outcome growth in the second wave pilot areas to that in control areas between 2002 and 2003, i.e. before the program was implemented in the second wave pilot areas. To do this we use recall information for 2002, as indicated in Table 3. We find no significant difference in outcome growth between second wave pilots and control areas.18 For the employees' analysis, we conduct similar pre-program tests using LFS data.
16 Because of the short time series available in our Employer and Employee Survey data and a lack of individual identifiers in the LFS data, we have not been able to investigate the degree to which area-level shocks are serially correlated. But we can give an indication of the importance of area-level shocks in explaining the residual variability in equations (1) and (2). Following Bertrand et al. (2004), we specified a pooled regression of the outcome variable on the same controls used in equations (1) and (2), allowing for area-level random shocks. We estimated the proportion of residual variance that is attributed to the area component using data from control areas in both the Employer and Employee Surveys. Its value varies depending on the survey used and the outcome measure considered, but is well above 25% in almost all cases. Given this we cluster standard errors at the area level.
17 We are not able to cluster standard errors in this way for the analysis using the LFS, since the appropriate local identifier (local authority) is not available.
18 The coefficient (standard error) on the dummy variable for second wave pilots using the qualification measure is -0.47 (0.57) and using the occupation measure is -0.61 (0.50). Although the outcome growth is not statistically different across second wave pilots and controls, the fact that the coefficients are negative suggests that anticipation effects might be present in the second wave pilot areas. If anything, this would bias our estimates of the program effect upwards. We discuss this further in section 4.3.
For first wave pilot areas we use pre-program information from 2001 and 2002 to re-estimate equation (2), with the time and time-area interaction terms covering the period one year before the pilots were introduced; for the second wave areas we use data from 2001, 2002 and 2003 to estimate an augmented version of equation (2) containing an additional set of time and time-pilot area interaction terms, covering the period two years prior to the introduction of the ETP. Again, we find no evidence of significant pre-program effects in any of the first or second wave pilot areas.

Two additional considerations are worth mentioning. First, our estimation strategy implicitly relies on being able to find employers and employees in pilot and control areas that share the same (or reasonably close) pre-program characteristics, the 'common support' assumption (see Heckman et al., 1999). This proved an empirically negligible issue for the analysis, as the characteristics of firms and workers have roughly the same distribution across pilot and control areas in the samples used in estimation. As we discuss in section 4.3, we checked that our results are robust to imposing common support, as well as to the parametric specification, by implementing the semi-parametric counterpart of equation (1) via propensity score matching. Second, we also conducted a large number of robustness checks on our basic specifications, studying robustness to alternative definitions of employer eligibility and alternative control groups. These are outlined in section 4.3.
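A minimal sketch of the pre-program ('placebo') version of this test is given below, assuming a toy data set of eligible employees observed only in pre-program years: under common trends, the pilot-by-year interaction terms should be indistinguishable from zero. The variable names, simulated data and use of heteroskedasticity-robust standard errors are our own simplifications, not the evaluation's exact specification.

```python
# Sketch of a pre-program placebo test of the common trends assumption on toy data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 6_000
lfs_pre = pd.DataFrame({
    "pilot": rng.integers(0, 2, n),
    "year": rng.choice([2001, 2002, 2003], n),  # all pre-program years for second wave areas
    "age": rng.integers(19, 65, n),
})
lfs_pre["training"] = (rng.random(n) < 0.10).astype(int)

placebo = smf.ols("training ~ pilot * C(year) + age", data=lfs_pre).fit(cov_type="HC1")

# Under common pre-program trends, the pilot:C(year)[...] coefficients should be individually
# and jointly insignificant.
print(placebo.summary())
```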
4 Results
In this section we present our main findings on the early effects of the ETP, followed by a number of robustness exercises, including an analysis of second and third year effects for employees using the LFS data. We present separately the findings for the effect of the ETP on the provision of training by eligible employers and on the receipt of training by eligible employees. The two sets of results paint the same picture: there is no evidence that the ETP pilots affected the incidence of training in the first years of their operation. This conclusion is reinforced by all of our robustness checks.

4.1 Descriptive statistics
To our knowledge, there is little published evidence on the characteristics of low-skilled employees who take up formal training, and even less has been documented about the type of employers that provide training to low-skilled employees. Before presenting our estimates of the effect of the ETP program on the take-up of training, we discuss employees' and employers' decisions to engage in training, and provide some descriptive statistics to illustrate how the propensity to train (or, in the case of employers, to provide training) varies according to the characteristics of the individual and the firm (see also Lynch and Black, 1998, and Loewenstein and Spletzer, 1999, for evidence on general and specific training). This analysis also provides information on the control variables we use when estimating the effect of the program.

Individuals' motivations for training will include the expectation of improved employment prospects, higher wages and any benefits (enjoyment) from simply undertaking a training course, weighed against the costs of doing so. For employed individuals, the decision to undertake training may also be heavily influenced by the characteristics of their job and their employer. Indeed, in the context of the ETP program, the training decisions of employees and employers are necessarily interlinked: employee participation depends on the employer participating in the program.

In Table 4 we examine the characteristics of individuals engaged in job-related training using LFS data. The table presents the marginal effects from two probit models, each estimating the probability that an individual has received job-related training in the last three months, conditional on the listed individual characteristics (and those of their employer). The first regression covers all employees, while the second is estimated only for low-qualified workers with below Level 2 qualifications, the target group for the ETP and Train to Gain policies. The results confirm the suggestion in Figure 1 that the probability of training is positively related to employees' existing qualification levels, and mirror findings in Lynch and Black (1998). Amongst the low skilled, the following characteristics are associated with a greater probability of receiving training: being relatively young; working in the public sector, in particular in the health, social work and education sectors; being in a supervisory role; and being relatively new to one's job (with job tenure of less than one year). Those in higher occupational grades are more likely to receive training than those in lower-grade occupations, and full-time workers are more likely to receive training than part-timers. Workers in medium and large firms are significantly more likely to receive training than workers in small firms.
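To illustrate the kind of estimation reported in Table 4, the sketch below fits a probit for the probability of receiving job-related training in the last three months on toy data and reports average marginal effects. The variable names are stand-ins for the LFS characteristics listed in Table 4, and the simulated coefficients only loosely mimic the patterns described in the text.

```python
# Sketch of a Table 4 style estimation on toy data: probit with average marginal effects.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 5_000
lfs = pd.DataFrame({
    "male": rng.integers(0, 2, n),
    "age": rng.integers(19, 65, n),
    "public_sector": rng.integers(0, 2, n),
    "supervisor": rng.integers(0, 2, n),
    "firm_medium_or_large": rng.integers(0, 2, n),
})
# Simulated outcome: younger workers, supervisors, public sector workers and those in larger
# firms are made more likely to train, echoing the patterns reported above.
xb = (-1.6 - 0.01 * (lfs["age"] - 19) + 0.3 * lfs["supervisor"]
      + 0.2 * lfs["public_sector"] + 0.25 * lfs["firm_medium_or_large"])
lfs["trained_3m"] = (rng.random(n) < 1 / (1 + np.exp(-xb))).astype(int)

probit = smf.probit(
    "trained_3m ~ male + age + public_sector + supervisor + firm_medium_or_large", data=lfs
).fit(disp=0)
print(probit.get_margeff().summary())  # average marginal effects, analogous to Table 4
```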
Table 4: The probability of receiving job-related training in the last 3 months
Dependent variable: 1 (training) / 0 (not training)

                                                            All employees    Employees
Previous highest qualification level (base group Level 4+)
  Level 3
  Trade
  Level 2
  < Level 2
  Other
  None
Gender (base group Female)
  Male
Age (base group 19-24)
  25-34
  35-44
  45-54
  55-59
  60-64