Setting Up a Language Testing Program for Bilingual Incentive Pay

Kym Derriman

Parrot Language Testing

June 2022

Many organizations are waking up to the fact that they need multilingual talent… badly, and many are starting to create bilingual incentive pay programs. For a lot of businesses, though, this is still a new concept. Hopefully this guide helps you on that journey.

Of course, if you’re paying someone extra for language skills, you have to measure it somehow. You can’t just send out a survey and ask “what languages do you speak,” right?

Using an unbiased, third-party language test is the common (and often best) approach. But where do you go from there?

Test what matters

Think about the tasks your employees will do in their roles.

Does their job require them to speak with a certain degree of skill? Measure speaking. A good speaking proficiency test measures interpersonal speaking: the ability to hold a real conversation.

Are you hiring people who will mainly be communicating via email or chat? Measure writing!

The two skills above, speaking and writing, will cover 99% of professional needs. These are the productive skills.

The receptive skills (listening and reading) will almost always measure higher than the productive skills, so in most cases testing them is redundant.

You only really need to test receptive skills if you’re teaching language students and want to measure progress — or if you run a super secret spy agency and your agents are out there somewhere just… listening… and reading… creepy.

Know how language proficiency is measured

Language skills in the U.S. are measured using a standard scale created by the government. For any history nerds like me, the ILR scale was created after World War II and the Korean War because we realized that, as a country, we kinda suck at language skills – no good for fighting wars on foreign soil. Anyway… no more history, I promise.

The ILR scale is based on real-world language skills. It measures what people can actually do with their language skills on the job, not just chatting with grandma. After all, the government uses this scale to rate the ability of diplomats and spies and badass special ops dudes, so they have to make sure these guys have the skills needed to survive.


Use the right test

Of course, I’m super biased because I know that Parrot is by far the best test out there, but if you absolutely feel the urge to shop around, at least make sure whatever test you use meets the criteria below.

  • Human Rated

Sorry to all of you computer-scored tests out there, but you’re just not there yet. No one who knows their stuff uses computers to measure language proficiency. All of the PhDs and Fortune 100 business leaders I’ve talked to agree that we’ve got a long way to go before computers can do this reliably.

  • Valid

If the vendor (language testing company) doesn’t trust their test enough to publish research, you shouldn’t either. Think about it – if you had a study showing that your test works, wouldn’t you want to share it with anyone who asks? Definitely ask.

  • Reliable

Multiple raters – simple. Ask the vendor how many people rate each test. Single-rated tests have been found to be highly unreliable, especially in the middle range of language proficiency… often no better than a coin flip. Multiple raters increase reliability tremendously. Insist on it.


Set passing scores – how good is good enough?

Some of the common questions I hear include:

  • What are other companies/agencies doing… Can I just copy it?
  • How is language skill even measured?
  • What’s the standard process for choosing a passing score?

Your best bet, legally and otherwise, is to use a valid, time-tested process. There are different approaches you can take; these are the most common.

  1. Benchmarking. Essentially, testing people who are currently doing the job adequately and using their scores as a guide.
  2. Task Analysis, a.k.a. Have the professionals handle it. They’ll work with language experts and subject matter experts from your organization to create a document that recommends passing scores based on their analysis.
  3. Interview Subject Matter Experts. This is the DIY version of a task analysis. Less expensive, but like any DIY project, you gotta put in the elbow grease (or better yet, delegate!)

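If you like seeing things spelled out, here is a minimal sketch of the benchmarking idea in Python. Everything here is a hypothetical illustration, not Parrot’s methodology: the scores are made-up numeric stand-ins for proficiency levels, and the 25th-percentile cutoff is just one reasonable, documented choice you could make.

```python
# Hypothetical benchmarking sketch: derive a passing score from the
# test scores of employees already performing the job adequately.
# The score values and the percentile choice below are illustrative
# assumptions, not an official standard.

def benchmark_cutoff(incumbent_scores, percentile=25):
    """Return the score at the given (nearest-rank) percentile."""
    if not incumbent_scores:
        raise ValueError("need at least one incumbent score")
    ordered = sorted(incumbent_scores)
    # Nearest-rank percentile: the smallest score such that at least
    # `percentile`% of incumbents scored at or below it.
    rank = max(0, int(len(ordered) * percentile / 100 + 0.5) - 1)
    return ordered[rank]

# Made-up scores for seven incumbents who do the job adequately.
scores = [2.0, 2.5, 2.5, 3.0, 3.0, 3.0, 3.5]
print(benchmark_cutoff(scores))  # prints 2.5
```

The point of the percentile (rather than the minimum) is to avoid anchoring your cutoff to a single outlier; whatever threshold you pick, write down why you picked it.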
The most important thing is that you choose a method, document the process, and stay consistent. If you want a more in-depth guide, check out our brand new video series on Setting Passing Scores! They’re super short crash courses (2-6 minutes, 4 videos, no biggie).

Follow these guidelines and you’ll be on your way to creating an effective, fair, and legally defensible language testing program that benefits you and your organization.

Happy Testing!
