Census Bureau Awards Cooperative Agreements to Georgetown University and Purdue University

Written by John Abowd, associate director, Research and Methodology Directorate

Today, the U.S. Census Bureau awarded two cooperative agreements to research teams at Georgetown University and Purdue University. These teams of university-based researchers are at the forefront of the emerging field of privacy-preserving data analysis, and their efforts will assist the Census Bureau in ensuring we continue to be a leader in protecting confidential information.

The Georgetown University project will help develop methods for publishing data that satisfy both formal mathematical privacy requirements and legal standards for privacy protection. Their research, combined with ongoing research at the Census Bureau, will provide improvements to existing methods that protect privacy by avoiding the release of any information that would identify an individual or business in public statistics.

The projects complement new initiatives within the Census Bureau to strengthen our disclosure avoidance methods, especially as they apply to the detailed publications that result from our flagship products: the 2020 Census, the American Community Survey, and the 2017 Economic Census.

The team from Georgetown University, led by Kobbi Nissim, includes two of the computer scientists who originally developed the theory of differential privacy — the first privacy-preserving data analysis model — as well as leading researchers from Harvard University who specialize in cryptography and information law. Their work for the Census Bureau will help improve the way we understand and implement our statutory mandate to protect the confidentiality of all respondent information in the Big Data era.
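For readers new to the concept, differential privacy formally bounds how much any single respondent's data can change a published statistic. The textbook illustration is the Laplace mechanism for counting queries, sketched below in Python; this is a generic illustration of the concept, not the method the Georgetown team or the Census Bureau will implement.

```python
import numpy as np

def laplace_count(true_count, epsilon):
    """Release a count with epsilon-differential privacy via the
    Laplace mechanism: adding or removing one respondent changes a
    count by at most 1, so noise is drawn with scale 1/epsilon."""
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Example: a hypothetical small-area count of 137, protected at epsilon = 0.5.
# Smaller epsilon means stronger privacy and noisier published counts.
print(round(laplace_count(137, epsilon=0.5)))
```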

The Purdue University project will investigate methods to improve the usefulness of anonymized data by studying systems where automated techniques perform many of the tasks currently performed directly by data analysts preparing the publication products. These private automated techniques have the potential to produce high-quality publishable data without compromising the privacy of the respondents even inside the Census Bureau. This research is complementary to our ongoing research on methods that strengthen traditional disclosure avoidance techniques.

Chris Clifton, a computer scientist with an extensive research record in data anonymization, leads the team from Purdue University. He is a past program director in the National Science Foundation’s Computing and Information Science Directorate. Their research is expected to help the Census Bureau better understand how to preserve the suitability of our data products for their many uses once we adopt modern privacy-preserving anonymization methods better adapted to the Big Data era.

Both awards are three-year collaborative efforts that will provide us with the time to research, test and further refine innovative methods that strengthen the confidentiality protections mandated by Title 13 of the U.S. Code.

The Census Bureau’s mission is to serve as the leading source of quality data about the nation’s people and economy. We honor privacy, protect confidentiality, share our expertise globally, and conduct our work openly. These new cooperative agreements provide complementary approaches to innovative methods and procedures for executing the dual statutory mandates in Title 13 U.S.C. — to collect data in order to publish statistics and to maintain the confidentiality of respondent information.

Moving forward, the Census Bureau intends to use Cooperative Agreement Authority to enter into partnerships with leading experts in order to produce innovative work and to ensure that we remain the leading source of quality data about the nation’s people and economy. We will use this important tool to engage with leading experts in academia, research organizations and nonprofit agencies. Our goal is to find the best sources of data, the best methods to analyze these data, and the best tools to provide data to the public.


Research on Plant Dynamics in the Manufacturing Sector

By Lucia Foster and Scott Ohlmacher, Center for Economic Studies

Why do some manufacturing plants grow and thrive while others falter? Economists using U.S. Census Bureau plant-level microdata have approached this complex question from at least three different angles. First, they looked at microeconomic patterns at the plant level. Second, they examined the growth, survival and exit of manufacturing plants throughout the business cycle. Finally, they documented the long-term, secular trends in manufacturing.

Each of these is the subject of recent papers by researchers using Census Bureau microdata from the Annual Survey of Manufactures (ASM), the Census of Manufactures (CM), and the Longitudinal Business Database (LBD). While this blog highlights these papers, an extensive literature on this subject using Census Bureau microdata exists (many of these papers can be found in the CES Working Paper series).

Turning first to the microeconomic patterns, the growth, survival and exit of manufacturing plants depend upon their profitability. Profitability is influenced by many factors internal and external to the plant. One important component of this profitability is the plant’s productivity. Empirical evidence using Census Bureau plant-level data reveals that there are differences in productivity among manufacturing plants even within the same narrowly defined industries. Differences in location and production technology are two possible reasons why productivity at manufacturing plants varies, even within the same industry.

The Census Bureau collects information on plant characteristics through the ASM and the CM. However, even after controlling for relevant characteristics, important differences in productivity remain. Surveying the productivity literature, Syverson (2011) notes that, using the same measured inputs, a manufacturing plant at the upper end of the productivity distribution is able to produce almost twice as much output as a manufacturing plant in the same industry at the lower end of the distribution. In discussing possible reasons for these differences, Syverson comments that managers have long been thought to be an important factor, but without data, their importance has been speculative.

The Management and Organizational Practices Survey (MOPS), a supplement to the ASM, is intended to partly fill the data gap by collecting information on these practices. Evidence from the MOPS suggests that management practices are correlated with productivity in manufacturing plants. Bloom et al. (2013) find that “structured” management practices related to monitoring, targeting and incentives are tightly linked to better performance (including higher productivity). These “structured” practices include monitoring a large number of high-frequency key performance indicators (KPIs), setting realistic production targets, making sure that all levels of the organization at the plant are aware of KPIs and targets, and setting bonus, promotion and dismissal incentives based on those targets. While structured management practices are associated with positive outcomes, many plants do not adopt these practices. Researchers are now looking into why there are differences in management practices at plants even within the same firm.

Brynjolfsson and McElheran (2016) find that the adoption of intensive data-driven decision-making and an increased allocation of decision-making to front-line production workers (versus manager-centric decision-making) are associated with large gains in productivity for plants in industries that are generally capital intensive and utilize “continuous-flow” operations.

The research above focuses on the supply side of profitability, but the demand side is also important. The challenge here is that the Census Bureau does not collect microlevel information on prices. However, researchers have been able to create proxies for the demand side in a limited sample of manufacturing plants for which the Census Bureau collects both revenue and physical output. Using this sample, Foster, Haltiwanger and Syverson (2016) show that much of the growth of plants is dependent on the demand side. They find that new manufacturing plants have higher physical productivity and lower revenue productivity compared to their more mature counterparts, reflecting that new plants set prices low in order to build up their market and grow.

In terms of business cycles, Foster, Grim and Haltiwanger (2016) examine the growth, survival and exit dynamics of manufacturing plants during recent cycles. Regardless of the overall economic conditions, it is generally true that plants that are more productive grow and thrive, while lower-productivity plants shrink and exit. In most business cycle downturns, this process of reallocation from less productive plants to more productive plants accelerates. However, they find that in the Great Recession, this reallocation of economic activity from the least productive plants to the more productive ones weakened relative to other downturns. Since this weakening was especially pronounced for young plants, they hypothesize that credit constraints impeded the reallocation. Researchers are also using the MOPS to look at the management and organizational characteristics of manufacturing plants that are better able to weather business cycles.

Finally, researchers have used Census Bureau microdata to better understand long-term trends in manufacturing. Using plant-level data is critical to understanding these trends because of changes in industry classification schemes. Without controlling for these changes, it is unclear how much of a trend reflects changes in underlying economic activity at plants versus changes in classification.

Pierce and Schott (2016) focus on the decline in U.S. manufacturing employment from 2000 to 2007. They examine the link between this decline and China’s accession to the World Trade Organization (WTO) in 2001. Using the Longitudinal Business Database (a Census Bureau research dataset) and the CM, they examine the response of manufacturing plants while controlling for changes in classification. In addition to changes at the extensive margin (including within-firm relocation of production outside the United States), they find evidence of capital deepening at U.S. manufacturing plants that continued operating during this period.

One topic that figures in Pierce and Schott’s work is the impact of uncertainty on manufacturing plants’ decisions. They cite anecdotal evidence that uncertainty concerning China’s trade status leading up to its accession to the WTO affected manufacturing plants’ planning decisions. The second wave of the MOPS, which is currently in collection, includes a section on uncertainty. We look forward to research using the MOPS that will enable us to better understand the impact of uncertainty on manufacturing plants’ growth, exit and survival.


Challenges Facing the Disclosure Review Board

Written by: William Wisniewski, Center for Disclosure Avoidance Research

At the U.S. Census Bureau, the Disclosure Review Board is best known as the team that establishes and reviews official Census Bureau disclosure avoidance policies, ensuring that publicly released data products do not reveal information about individual survey respondents. Yet the board’s members also serve other important and lesser-known roles. For example, they work with researchers in the Center for Disclosure Avoidance Research to determine the effectiveness of current disclosure avoidance techniques in protecting data products. In addition, these researchers study and develop new techniques that may be applied to future releases of data products.

This work is critical in meeting the guidelines established under Title 13 and Title 26 of the U.S. Code, which state that the Census Bureau is required to protect the confidentiality of individual respondents when it releases data to the public.

This seemingly simple mission can often pose challenges. For example, what happens if a researcher wants to release counts and demographic characteristics of individuals in every county in the United States? What if a researcher wants to release an infinite number of variables in a Public Use File? What should a researcher do if they encounter small cell sizes within their data product?
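To make the last question concrete, the most familiar response to small cells is threshold-based suppression. The sketch below illustrates the idea on a hypothetical table; the threshold, data and code are illustrative only and do not represent official Disclosure Review Board rules.

```python
import pandas as pd

# Hypothetical county tabulation; the threshold value is
# illustrative, not an official Census Bureau rule.
THRESHOLD = 3
table = pd.DataFrame({"county": ["A", "B", "C", "D"],
                      "count": [1542, 2, 87, 1]})

# Primary suppression: withhold any cell below the threshold. A real
# application also needs complementary suppression so the withheld
# cells cannot be recovered from published totals.
table["published"] = table["count"].where(table["count"] >= THRESHOLD,
                                          other="(S)")
print(table)
```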

These types of questions and others, along with their solutions, will be presented in a topic-contributed session at the 2016 Joint Statistical Meetings on Wednesday, August 3, 2016, titled “Innovations in Disclosure Avoidance at the U.S. Census Bureau.” We will explain specific issues and walk through some of the methods and techniques used to ensure that the Disclosure Review Board meets its mission: to support the Data Stewardship Executive Policy Committee in its efforts to ensure that the Census Bureau protects the confidentiality of all Title 13 and Title 26 respondent information in publicly released data products.

Looking to the future, the Disclosure Review Board will also continue to face other challenges. It is likely that Census Bureau and other researchers will need to develop, test, and apply new methodologies and techniques to Census Bureau data, particularly as the quantity of potentially linkable data outside of the Census Bureau increases.


Evaluating Possible Administrative Records Uses for the Decennial Census

Written by: Andrew Keller and Scott Konicki

When a household does not respond to the census, the U.S. Census Bureau must send a field worker to that address to complete a nonresponse follow-up interview. For the 2010 Census, 72 percent of American households mailed back a completed census form. The remaining 28 percent that did not respond by mail were counted by a census taker who visited their address. In-person interviews are much more costly than getting a response back in the mail. For the 2020 Census, the Census Bureau is researching the possible use of administrative records to provide a status and count for some addresses in the nonresponse follow-up universe—that is, to indicate whether the housing unit is likely to be occupied or vacant, and how many people may live in it. As outlined below, this information will aid in reducing the number of contacts during the nonresponse follow-up operation.

Over the last four years, the Census Bureau has tested various methods using administrative records to reduce the nonresponse follow-up workload. All tests used administrative records modeling with varying levels of complexity. In the tests, the administrative records allow us to split the nonresponse follow-up address universe into three categories: (1) units identified as administrative records occupied, (2) units identified as administrative records vacant, and (3) addresses for which no determination could be made.

The figure below shows the flowchart of the contact strategy for administrative records cases in the nonresponse follow-up operation, specific to the 2016 Census Test. When administrative records indicated that an address was vacant, it received no in-person visits during the nonresponse follow-up operation.

[Figure: 2016 Census Test contact strategy flowchart for administrative records cases in the nonresponse follow-up operation]

Addresses that the administrative records indicated to be occupied received only one visit in the 2016 Census Test. All units in the nonresponse follow-up address universe, whether the administrative records indicated they were vacant or occupied, did receive an additional postcard by mail during the nonresponse follow-up operation. The postcard told people at these addresses how to self-respond by filling out the questionnaire online or by responding through the questionnaire assistance line. In short, both before and during nonresponse follow-up, the Census Bureau attempts in several ways to obtain and use self-responses before relying on administrative records determinations.
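The contact strategy described above can be restated schematically. The sketch below is only a restatement of the rules summarized in this post, with hypothetical function and field names; it is not production 2020 Census code.

```python
def nrfu_contact_plan(admin_record_status):
    """Schematic contact plan for a nonresponse follow-up (NRFU)
    address under the 2016 Census Test rules summarized above.
    Status values and the plan structure are illustrative only."""
    if admin_record_status == "vacant":
        # Administrative records say vacant: no in-person visits.
        return {"in_person_visits": 0, "reminder_postcard": True}
    if admin_record_status == "occupied":
        # Administrative records say occupied: a single visit.
        return {"in_person_visits": 1, "reminder_postcard": True}
    # No determination: the address stays in the full NRFU visit
    # strategy (several attempts; the number is set by the operation).
    return {"in_person_visits": "full", "reminder_postcard": True}

print(nrfu_contact_plan("vacant"))
```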

The development of possible administrative records models has been guided by comparing models retrospectively against 2010 Census results. Doing so provides a national evaluation of potential administrative records models. However, a difficulty underlying the evaluation of administrative records modeling usage is handling concerns such as undercounts and erroneous enumerations. Although the analysis using the 2010 Census results provides a solid basis for assessing model performance, it is not the only way to measure it.

To learn more, please join us at the Joint Statistical Meetings for our presentation, “Nonresponse Follow-Up Contact Strategy for Administrative Record Cases.”


Researching Methods for Scraping Government Tax Revenue From the Web

Written by: Brian Dumbacher, Mathematical Statistician, Economic Statistical Methods Division, and Cavan Capps, Big Data Lead, Associate Directorate for Research and Methodology

The Quarterly Summary of State and Local Government Tax Revenue is a sample survey conducted by the U.S. Census Bureau that collects data on tax revenue collections from state and local governments. Much of the data are publicly available on government websites. In fact, instead of responding via questionnaire, some respondents direct survey analysts to their websites to obtain the data. Going directly to websites for those data can reduce respondent burden and aid data review.

It would be useful to have a tool that automatically collects, or scrapes, relevant data from the web. Developing such a tool can be challenging. There are thousands of government websites but very little standardization in site structure or publication format. A large majority of government publications are in Portable Document Format (PDF), a file type not easily analyzed. Finally, both web and PDF documents have constantly changing formats.

To solve this problem, researchers at the Census Bureau are studying and applying methods for unstructured data, text analytics and machine learning. These methods belong to the realm of “Big Data.” Big Data refers to large and frequently generated datasets representing a variety of structures. As opposed to designed survey data, Big Data are “found” or “organic” data. Typically, these data are created for one purpose (a click log, a social media post or an online PDF report) and are innovatively repurposed for something else, such as inferring behavior. Since the data were not designed with inference in mind, they often pose unique challenges.

The goal of this research is to develop a web crawler with machine learning that performs three tasks:

  1. Crawls through a government website and discovers all PDFs.
  2. Classifies each PDF according to whether it contains relevant data on tax revenue collections.
  3. Extracts the relevant data, organizes it and stores it in a database.

For task 1, we used Apache Nutch, an open-source web crawler. In a production environment, the process will scale up by distributing the work over many computers and then combining the results.

For task 2, we developed a technique to convert PDF documents to text and reorganize the output. A classification model applied to the converted text determines whether the document contains relevant data on tax revenue collections. This model uses the occurrence of key sequences of words, such as “statistical report” and “sales tax income,” along with other text analysis techniques.
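As a rough illustration of this classification step, the sketch below scores converted PDF text on a handful of key phrases and fits a simple classifier. The phrase list, training data and model choice are hypothetical stand-ins, not the production model.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Key phrases of the kind mentioned above; a production model would
# use many more features and a richer text pipeline.
KEY_PHRASES = ["statistical report", "sales tax income", "tax revenue"]
vectorizer = CountVectorizer(vocabulary=KEY_PHRASES, ngram_range=(1, 3))

# Hypothetical training data: text converted from PDFs, labeled by
# whether each document contains relevant tax revenue data.
train_texts = ["monthly statistical report on sales tax income ...",
               "agenda for the city council meeting ..."]
train_labels = [1, 0]

model = LogisticRegression()
model.fit(vectorizer.transform(train_texts), train_labels)

# Classify a newly converted document.
new_doc = "quarterly tax revenue statistical report ..."
print(model.predict(vectorizer.transform([new_doc])))
```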

For task 3, we are considering various ideas. Relevant data would probably be found in tables and in close proximity to key sequences of words. We will explore table identification methods based on the distribution of terminology in the PDF and additional modeling that maps the nonstandard data in PDFs to standard definitions in Census Bureau publications.

The Census Bureau looks forward to continuing this web scraping research and exploring new machine learning algorithms that reduce respondent burden, speed survey processing and improve data collection.

To learn more about the research methods for scraping government tax revenue from the web, please join us at the Joint Statistical Meetings on August 2, 2016.


Reducing Respondent Burden in Counting Juveniles

Written by: Suzanne Marie Dorinski, Economic Statistical Methods Division

The U.S. Census Bureau conducts the Census of Juveniles in Residential Placement every other year for the Office of Juvenile Justice and Delinquency Prevention. This survey collects data from almost 2,400 public and private facilities that hold juveniles charged with or adjudicated for a delinquency or status offense, providing a count of juveniles in publicly and privately run juvenile correctional facilities.

The data collection has two parts: (1) questions about the facility and (2) questions about each charged or adjudicated juvenile held in the facility.

For each juvenile, we ask the following:

  • Gender.
  • Date of birth.
  • Race.
  • Who placed the juvenile in the facility.
  • Most serious offense.
  • State or territory where offense was committed.
  • Adjudication status.
  • Admission date.

Facilities have the option of responding by mail, through the internet or by fax. Those that respond online can enter the data for each juvenile or they can upload a data file. For the 2013 collection, we suggested that larger facilities should upload a data file, but we did not define what counts as a larger facility.

Our online data collection tool collects paradata for each response. The paradata file captures the values that the facility enters, as well as any changes that the facility makes, and keeps track of the edit messages that the facility sees while reporting its data. Each action has an associated time stamp, so we can tell how long each facility spends online reporting its data.

The graphic below shows that as the number of juvenile records entered online increased, the amount of time spent in the data collection tool increased. To reduce the burden on the juvenile facilities, we could include this graphic in the next data collection and suggest that facilities with 50 or more juvenile records upload a data file instead of spending hours entering that data in the data collection tool. Knowing this information is essential to helping us make responding to the survey easier for staff at the juvenile facilities.

[Figure: time spent in the online data collection tool versus the number of juvenile records entered]
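For illustration, the sketch below shows one way to derive time spent online from time-stamped paradata. The column names and data are hypothetical; a production version would also need to break out separate sessions.

```python
import pandas as pd

# Hypothetical paradata extract: one row per logged action, with the
# facility identifier and the action's time stamp.
paradata = pd.DataFrame({
    "facility_id": [101, 101, 101, 202, 202],
    "timestamp": pd.to_datetime([
        "2016-03-01 09:00", "2016-03-01 09:45", "2016-03-01 11:30",
        "2016-03-02 14:00", "2016-03-02 14:20",
    ]),
})

# Time spent online per facility: last action minus first action.
time_online = (paradata.groupby("facility_id")["timestamp"]
               .agg(lambda ts: ts.max() - ts.min()))
print(time_online)
```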

We have also shared these results with the Office of Juvenile Justice and Delinquency Prevention, which plans to use them to adjust its estimates of the respondent burden hours it reports to the Office of Management and Budget each year.

I will provide more suggestions for reducing respondent burden for juvenile residential facilities at the 2016 Joint Statistical Meetings and in the conference proceedings.


Estimating the Reliability of Product Sales Totals in the Economic Census

Written by: Katherine Jenny Thompson, Complex Survey Methods and Analysis Group; Matthew Thompson, Business Register and MEPS Statistical Methods Branch; and Roberta Kurec, Economic Census and Related Surveys Statistical Methods Branch, Economic Statistical Methods Division

The economic census is the U.S. Census Bureau’s official five-year measure of American business and the economy. It provides industry and geographic detail not typically available from other sources of economic statistics, benefiting businesses, policymakers and the American public.

The term “census” in this case is actually a slight misnomer. The Census Bureau requests data from most large businesses and a sample of small businesses. We ask each of these businesses to provide data on sales, shipments, and receipts or revenues for each of its establishments (i.e., each single physical location), as shown in Figure 1.

[Figure 1: example of establishment-level data collection on sales, shipments, and receipts or revenues]

We also ask for the revenues obtained by each establishment from the types of products likely to be produced or sold based on its primary industry. Product statistics are needed by the Bureau of Economic Analysis to benchmark the national accounts, as well as by the Bureau of Labor Statistics in constructing producer price indexes. The North American Product Classification System defines over 8,000 different products that can be reported across the entire census.

As an example, Figure 2 provides a short extract from the product collection for establishments in the “Automobile Dealers” retail trade industry from the 2012 Economic Census. Notice that, on the surface, these products don’t seem to be related to automobile dealers, but they are products that could be found at automobile dealerships, and that is why they are included on the questionnaire. The product list for some establishments can span more than 50 potential products. Additionally, for certain industries the Census Bureau designates “must-have” products. For example, an automobile dealer should report revenue from automobile sales.

[Figure 2: extract from the product collection for the “Automobile Dealers” retail trade industry, 2012 Economic Census]

In most industries, only a few products are frequently reported and many sampled establishments do not report any data on products. This makes it difficult to produce good product statistics and measures of reliability.

For the past two years, the Census Bureau has conducted extensive research into product statistics. Initial research by the team focused on determining a single missing data treatment method for products in the 2017 Economic Census. That research was presented in a topic-contributed session entitled “Evaluating Alternative Imputation Methods for Economic Census Products: The Cook-Off” at the 2015 Joint Statistical Meetings.

This year, we have been exploring how to estimate the variance for product sales. Besides the sampling, imputation and post-stratification components, there are additional challenges caused by the lack of good predictors and high expected zero rates for many products, compounded by the high product nonresponse rates. We believe that it is possible to find a variance estimator with good statistical properties for the well-reported products, but we remain concerned about the others. So far, the team has conducted two separate simulation studies that investigate the possibility of finding a variance estimator that performs well on many different products considering only (1) sampling variance and post-stratification, and (2) product nonresponse and hot deck imputation. We will share these results on August 1, 2016, at the JSM. The next phase of our research will combine the findings from the two separate studies to develop a single variance estimator for products.
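For readers unfamiliar with hot deck imputation, one of the components studied, the sketch below fills each missing product value with a randomly drawn donor from respondents in the same industry cell. It is a bare-bones illustration with hypothetical data, not the economic census procedure.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=12345)

# Hypothetical product-sales reports: NaN marks product nonresponse.
df = pd.DataFrame({
    "industry": ["441", "441", "441", "452", "452"],
    "product_sales": [250.0, np.nan, 310.0, 90.0, np.nan],
})

def hot_deck(group):
    """Fill missing values with randomly drawn donor values from
    respondents in the same cell (assumes each cell has a donor)."""
    donors = group.dropna()
    missing = group.isna()
    group.loc[missing] = rng.choice(donors.to_numpy(), size=missing.sum())
    return group

df["product_sales"] = (df.groupby("industry")["product_sales"]
                       .transform(hot_deck))
print(df)
```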


Update on the Current Population Survey Research

Written by: Stephanie Chan-Yang, Yang Cheng and Aaron Gilary, U.S. Census Bureau

The U.S. Census Bureau’s Current Population Survey is one of the oldest and largest household surveys in the United States. Since 1940, it has produced monthly statistics on labor force information. The Current Population Survey interviews about 72,000 households each month to estimate totals of persons unemployed, employed and not in the labor force, leading to the official estimate of the national unemployment rate.

The Current Population Survey applies a stratified two-stage cluster sampling design to select a representative sample of U.S. households. A housing unit selected for the sample is interviewed for four consecutive months, rotated out for eight months, and then interviewed for another four months. This approach aims to develop overall monthly estimates while also tracking monthly and annual changes among the sampled households.
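The 4-8-4 rotation pattern is compact enough to state in code. Below is a small sketch, indexing months from a household's first interview; the function name is illustrative.

```python
def interview_months(first_month):
    """Months (indexed from the first interview) in which a CPS
    household is interviewed under the 4-8-4 rotation design: four
    consecutive months in sample, eight out, then four back in."""
    first_spell = [first_month + m for m in range(4)]
    second_spell = [first_month + 12 + m for m in range(4)]
    return first_spell + second_spell

# A household entering in month 0 is interviewed in months 0-3 and 12-15.
print(interview_months(0))
```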

To manage these design features, the survey team has long relied on cutting-edge research on sampling, weighting and variance estimation. Given several key factors beyond our control, such as budgets, computational power and stakeholders’ needs, the survey team must be sufficiently agile to adapt to changes. This nimble approach requires a strong understanding of the underlying theory, an ability to adapt the survey quickly, and an opportunity to hone our methods under peer review.

Research Presented at the Joint Statistical Meetings

The survey team will give three presentations in the session “Update on the Current Population Survey Research” at this year’s Joint Statistical Meetings in August 2016.

  • Stephanie Chan-Yang will speak about the sample size of the Current Population Survey. She will explain how the sample size and allocation relate to the Bureau of Labor Statistics’ accuracy requirements for the sample design. Chan-Yang will further describe the Children’s Health Insurance Program expansion to the survey sample. This expansion increases the survey sample size in order to provide better estimates of low-income children without health insurance; these data feed into the Current Population Survey Annual Social and Economic Supplement. Finally, Chan-Yang will discuss recent research on reducing the sample size under budget constraints.
  • Yang Cheng will explore a new method to improve our composite estimates. In his research, he proposes an iterative version of our composite estimator (known as the AK composite estimator; its standard form is written out after this list) for the Current Population Survey. This new method includes the current AK composite estimator as a special case. In addition, the proposed method will reduce the mean squared error of the AK composite estimator when we choose the optimal estimator in this general family. Finally, Cheng will demonstrate the proposed method via comprehensive numerical studies.
  • Aaron Gilary will give an overview of Current Population Survey variance methodology. This talk discusses the survey’s current methods of calculating variances, with a focus on the Balanced Repeated Replication method (its basic formula also appears after this list). This method constructs a variance estimate by resampling the data using replicate factors. The talk highlights the components of the variance estimate that come from the survey sample design, and the different variance measures that the survey produces. He will conclude the presentation with ideas for future improvements.
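For reference, the two estimators named above can be written compactly. The expressions below are the standard textbook forms, with notation following common CPS documentation rather than these specific talks.

```latex
% AK composite estimator (standard form): the composite estimate
% Y''_t blends the current month's direct estimate Y_t with last
% month's composite carried forward by the month-to-month change
% \Delta_t measured on the overlapping sample portion; the A-term
% adjusts for differences between incoming and continuing rotation
% groups.
Y''_t = (1 - K)\, Y_t + K \left( Y''_{t-1} + \Delta_t \right) + A\, \hat{\beta}_t

% Balanced repeated replication: with R half-sample replicate
% estimates \hat{\theta}_r of a full-sample estimate \hat{\theta},
\widehat{V}(\hat{\theta}) = \frac{1}{R} \sum_{r=1}^{R} \left( \hat{\theta}_r - \hat{\theta} \right)^2
```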

To learn more about Current Population Survey methodology and research, please join us at the Joint Statistical Meetings on August 3, 2016, or contact us at: <stephanie.chan.yang@census.gov>, <yang.cheng@census.gov>, or <aaron.j.gilary@census.gov>.


Visit Us at the 2016 Joint Statistical Meetings in Chicago

On July 30, U.S. Census Bureau staff will join several thousand statisticians and experts in related professions to present testing and research results on many topics at the Joint Statistical Meetings in Chicago, Ill. Presented annually by the American Statistical Association, this year’s Joint Statistical Meetings will take place from July 30 to Aug. 4. The theme of this year’s conference is “The Extraordinary Power of Statistics.”

Attendees will present and hear about advances in statistical methodology and applications, including statistical theory and methodological development, state-of-the-art technological advances for data processing, and new advances in statistical sampling, estimation, and modeling.

Census Bureau experts will present on a spectrum of topics, including:

  • New machine learning research for collecting data from the web.
  • Estimating reliability in the economic census.
  • The treatment of imputed earnings.
  • Reducing respondent burden.
  • The Current Population Survey.

The Joint Statistical Meetings offer a unique international forum for Census Bureau staff to present their research for professional discussion. It is a major setting for ensuring that the Census Bureau’s statistical methodology remains at the cutting edge. We look forward to sharing our ideas at this year’s conference. For a complete listing of Census Bureau research presentations, see <http://www.census.gov/research/conferences/jsm/2016.html>.


Do Refund Anticipation Products Help or Harm American Taxpayers?

By: Maggie R. Jones, Center for Administrative Records Research and Applications

Many taxpayers rely on for-profit tax preparation services to file their income taxes. To make tax filing more appealing to taxpayers, preparers offer financial products that speed up the delivery of refunds. However, recent U.S. Census Bureau research suggests that these products may make families less financially secure.

“A Loan by any Other Name: How State Policies Changed Advanced Tax Refund Payments” examines the impact on taxpayers of state-level regulation of refund anticipation loans (RALs). Both refund anticipation loans and refund anticipation checks (RACs) are products offered by tax preparers that provide taxpayers with an earlier refund (in the case of a refund anticipation loan) or a temporary bank account from which tax preparation fees can be deducted (in the case of a refund anticipation check). Each product carries high interest rates (often an annual rate of more than 100 percent) and fees, making it very costly compared to the value of the refund.

States have responded to the predatory nature of refund anticipation loans through regulation. The working paper specifically looks at how New Jersey’s 2008 interest rate cap on RALs (no more than a 60 percent annual rate) affected taxpayers. Evidence suggests that the use of refund anticipation products among taxpayers living in ZIP codes near New Jersey’s border with another state increased after the policy changed. In other words, New Jersey’s regulation appears to have suppressed the volume of refund anticipation products offered within the state, with taxpayers near the border crossing into a bordering state to use the products.

Meanwhile, border taxpayers’ use of key social programs such as the Supplemental Nutrition Assistance Program, Temporary Assistance for Needy Families and Supplemental Security Income also increased. In other words, after the change in policy, use of both refund anticipation products and social programs increased for taxpayers in New Jersey border ZIP codes compared with other families, indicating greater hardship. The map below shows the ZIP codes used in the analysis.

[Map: ZIP codes used in the analysis]

At one time, the Internal Revenue Service informed preparers if there was an offset on a taxpayer’s refund. Under pressure from consumer advocates, the IRS stopped providing the indicator in 2010. By 2012, all of the major tax preparation companies had withdrawn from the RAL market, turning to RACs as a replacement. Consumers paid a minimum of $648 million in RAC fees in 2014. The maps below show the withdrawal from the RAL market and the increase in the RAC market between 2005 and 2012.

[Maps: RAL market withdrawal and RAC market growth, 2005–2012]

Refund anticipation products pose important questions for policymakers. For people filing taxes to receive higher refunds, tax preparers file additional forms that include claims for credits and deductions, which increases tax preparation costs. This translates into higher charges for low-income taxpayers who are eligible for these credits and deductions. Moreover, preparers target RALs and RACs to low-income taxpayers who expect substantial refunds through redistributive credits such as the Earned Income Tax Credit, arguing that RALs and RACs speed up refund receipt and help taxpayers pay off pressing debts or bills more quickly, making low-income families better off. However, some portion of this refund money goes directly from the tax and transfer system to tax preparers rather than to the intended recipients.
