**Moderator:** bullock

7 posts
• Page **1** of **1**

OCTA has developed a new MOE called the Corridor Synchronization Performance Index, or CSPI. It combines Average Speed, Stops per Mile, and the number of intersections made on a green vs. those stopped by a red during travel time runs into a more practical measurement tool. The index scores Average Speed from a low of 15 mph (score 8) to a high of 34 mph (maximum score 36); Green/Red from a low of 1.0 (score 8) to a high of 5.0 (maximum score 40); and Stops/Mile from a low end of 2.3 stops/mile (score 17) to a high end of 0.7 stops/mile (score 33). The sum of the three scores gives the total CSPI. A CSPI of 70 and above is considered a job well done. A CSPI below 70 indicates work to be done, and below 50 indicates the timing needs a total revamp and possibly mitigation of choke points or other impediments. Greg is putting this MOE into the latest version of Tru-Traffic, and we would like feedback.
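A minimal sketch of the index as described above, assuming linear interpolation between the stated endpoints and clamping outside them (OCTA's exact scoring tables aren't given in this post, so the interpolation is an assumption, and the helper names are mine):

```python
def interp_score(value, lo_val, lo_score, hi_val, hi_score):
    """Linearly interpolate a component score between two anchor points,
    clamping values outside the range to the endpoint scores."""
    t = (value - lo_val) / (hi_val - lo_val)
    t = max(0.0, min(1.0, t))  # clamp out-of-range inputs to the endpoints
    return lo_score + t * (hi_score - lo_score)

def cspi(avg_speed_mph, green_red, stops_per_mile):
    """Total CSPI from the three MOEs, per the ranges stated above."""
    speed_score = interp_score(avg_speed_mph, 15, 8, 34, 36)
    gr_score    = interp_score(green_red, 1.0, 8, 5.0, 40)
    # Stops/Mile improves as it decreases, so the anchors run high-to-low.
    stop_score  = interp_score(stops_per_mile, 2.3, 17, 0.7, 33)
    return speed_score + gr_score + stop_score
```

Note that with these ranges the maximum possible total is 36 + 40 + 33 = 109, so the 70 threshold sits at roughly two-thirds of the maximum.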

- Rondog39_TSOS
**Posts:** 2 • **Joined:** Mon Jan 31, 2011 9:12 pm

After some discussions, one other user and I have separately created some user-defined columns to add OCTA's CSPI to the TT&D report. We're comparing notes to ensure that our calculations agree and are correct. Once we're satisfied, I'll post a file with the set of formulas on the downloads page so others may install them, form opinions, and give feedback.

Regards.

Greg


- bullock
- Site Admin
**Posts:** 200 • **Joined:** Thu May 06, 2004 6:51 pm • **Location:** Pacific Grove, CA

I'll offer a few comments.


Comment I.

One consideration is how to compute the average CSPI for a collection of travel time runs. The calculation could either

- Compute the CSPI for each travel time run, then average all the individual CSPIs together, or
- Compute the average Speed, Green/Red, and Stops/Mile for the set of travel time runs, then enter those averages into the CSPI formula.

Tru-Traffic can do it either way, but the user-defined column must specify which method to use.

Does OCTA have a preference?

If neither OCTA nor anyone else has a policy or an argument for using method 1 or method 2, then I personally lean toward method 2, on the argument that it is more sensitive to the input: it postpones discarding the out-of-bounds information until the very end, so the intermediate calculations preserve and can still use that information.

Comment II.

A second comment is that I think the label "Green/Red" is confusing and even misleading. It seems to suggest something to do with the green time and the red time of the signal cycle. As I understand it, it instead refers to the number of links without stops vs. the number of links that include at least one stop. I think a label like "no-stop/stop signals", "go/stop", or "unstopped/stopped" would be clearer.

Regards.

Greg

- bullock
- Site Admin
**Posts:** 200 • **Joined:** Thu May 06, 2004 6:51 pm • **Location:** Pacific Grove, CA

I believe Method 2 would be most accurate.

In Method 1, the total CSPI score of each run could vary depending on each MOE and the min and max limits.

In Method 2, the averaging of all runs is completed to accurately estimate the average MOE. Then the CSPI is calculated from that to derive the corridor score.

For example, if runs 1, 2, 3, and 4 had speeds of 35, 20, 5, and 10 mph, the individual speed scores would be 36, 15, 8, and 8.

Method 1 would have an average speed CSPI of 16.75.

Method 2 would have an average speed of 17.5, and a corresponding average speed CSPI of 11.
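The two averaging orders can be made concrete with a small sketch. This again assumes linear interpolation of the speed score between the 15 mph/8-point and 34 mph/36-point endpoints; OCTA's actual table may round differently (e.g. the example above scores 20 mph as 15, where straight interpolation gives about 15.4), and the function names are mine:

```python
def speed_score(mph):
    # Assumed linear interpolation: 15 mph -> 8 points, 34 mph -> 36 points,
    # clamped to the endpoint scores outside that range.
    t = max(0.0, min(1.0, (mph - 15) / (34 - 15)))
    return 8 + t * (36 - 8)

def method1(speeds):
    # Method 1: score each run first, then average the per-run scores.
    return sum(speed_score(s) for s in speeds) / len(speeds)

def method2(speeds):
    # Method 2: average the raw MOE first, then score the average.
    return speed_score(sum(speeds) / len(speeds))

runs = [35, 20, 5, 10]  # mph; the four runs from the example above
```

For these runs Method 2 averages the speeds to 17.5 mph before scoring, so the two below-range runs (5 and 10 mph) keep pulling the result down, whereas Method 1 has already clamped them to the minimum score of 8 before averaging.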


- Jmy
**Posts:** 1 • **Joined:** Fri Feb 04, 2011 5:43 pm

That's a good example.


Here's another example showing the different sensitivities in the two averaging methods due to their discarding information early or late in the calculation: Suppose we have two cases in which we're averaging together just two travel time runs each.

In Case A, suppose

- the first run has an average speed of 34 mph -- just at the top-score speed in the CSPI definition, and
- the second run has an average speed of just 5 mph -- well below the bottom-score speed in the CSPI definition

In Case B, suppose

- the first run has an average speed of 45 mph -- well over the top-score speed in the CSPI definition, and
- the second run has an average speed of 15 mph -- just at the bottom-score speed in the CSPI definition

Given the asymmetry here, we expect Case A to have a lower speed-score contribution to the CSPI than Case B.

Using averaging Method 1, these two cases have exactly the same speed-score contribution: In each case, the first run has a speed-score of 36 (the maximum possible) and the second run has a speed score of 8 (the minimum possible), so the average speed-score is (36+8)/2 = 22 for either case using averaging Method 1.

Using averaging Method 2, the asymmetry of these two cases is exposed. Case A has an average speed of (34+5)/2 = 19.5 mph, giving a speed-score of 14.6, while Case B has an average speed of (45+15)/2 = 30 mph, giving a speed-score of 30.1, so the speed-score using averaging Method 2 reflects the differences between the two cases.

In summary, the Average CSPIs are

```
          Method 1   Method 2
Case A:      22        14.6
Case B:      22        30.1
```

Regards.

Greg

- bullock
- Site Admin
**Posts:** 200 • **Joined:** Thu May 06, 2004 6:51 pm • **Location:** Pacific Grove, CA

The choice in averaging method also affects intermediate calculations in the Green/Red contribution to the score. For example, the Red part (which is the number of links that include at least one stop) may be calculated with either formula


```
if(Stops>0,1,0)
```

or

```
min(Stops,1)
```

and with a separate column to accumulate the results along the artery.

But these two formulas have different behavior under Method 2 averaging. With Method 2, the number of stops along a link, averaged over multiple runs, might be a fractional value. The first formula would always return 1 for any fraction (greater than 0, of course), while the second formula would return the actual fraction.

For example, suppose there are two travel time runs, one with 0 stops along a segment and the other with 1 stop. The average number of stops in this case is 0.5: the first formula returns a Red count of 1, while the second formula returns 0.5. In either case, this value then affects the Green/Red ratio and then the corresponding scores.
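The two candidate formulas can be transcribed into Python to show exactly where they diverge (the originals are Tru-Traffic user-defined-column expressions; the function names here are mine):

```python
def red_if(avg_stops):
    # Formula 1: if(Stops>0, 1, 0) -- any nonzero average counts the link
    # as a full stopped link, discarding the fractional information.
    return 1 if avg_stops > 0 else 0

def red_min(avg_stops):
    # Formula 2: min(Stops, 1) -- a fractional average (possible under
    # Method 2 averaging) is carried through as-is.
    return min(avg_stops, 1)
```

For whole-number stop counts the two agree; they diverge only for the fractional averages that Method 2 can produce.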

So if we use Method 2 for averaging, then we must also choose between these two, now distinguishable, formulas for the Green/Red calculation. I'm leaning toward the second formula, using the same argument as before: it allows the CSPI to be more sensitive because it postpones discarding information until a later stage in the calculation. Someone else may have an argument for preferring the first formula, or may even use this as a reason to prefer the Method 1 style of averaging. Comments most welcome.

Greg

- bullock
- Site Admin
**Posts:** 200 • **Joined:** Thu May 06, 2004 6:51 pm • **Location:** Pacific Grove, CA

The Downloads page http://www.tru-traffic.com/downloads.htm now has the set of formulas that Jonathan Yee of DKS Associates created and donated (with some assistance and editing from me) to add OCTA's CSPI to your TT&D Reports. Thank you, Jonathan!

Greg


- bullock
- Site Admin
**Posts:** 200 • **Joined:** Thu May 06, 2004 6:51 pm • **Location:** Pacific Grove, CA
