Trying (and Failing) to Speed Up String.startsWith

Brian Hicks, March 1, 2021

When I was optimizing elm-csv, I noticed that Elm's String.startsWith is implemented using JavaScript's indexOf method. It looks something like this:

function startsWith(haystack, needle) {
  return haystack.indexOf(needle) === 0;
}

This strikes me as inefficient! Doesn't that mean that if haystack doesn't begin with needle, it'll still look through the whole string? That seems to waste a lot of cycles on work after we already know the result.

After thinking about it for a little bit, I imagined a faster way:

function startsWith(haystack, needle) {
  return haystack.slice(0, needle.length) === needle;
}

But I didn't know for sure whether it would be faster, so I started measuring things. After a few benchmarks, things were looking promising! I found that String.prototype.slice and reading a string's length operate in more-or-less constant time, and that ===, while not constant-time, is heavily optimized.

Hypothetically, I thought, this means that a slice-based implementation of startsWith could run much quicker than scanning through the whole string with indexOf.
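As a quick sanity check that the slice trick actually agrees with native startsWith, here's the slice-based version with a couple of spot checks (the test values are mine, not from the original benchmarks):

```javascript
// Slice-based startsWith: compare the prefix of haystack that is
// exactly needle.length characters long against needle itself.
function startsWithSlice(haystack, needle) {
  return haystack.slice(0, needle.length) === needle;
}

console.log(startsWithSlice("abcdef", "abc")); // true
console.log(startsWithSlice("abcdef", "bcd")); // false
// If haystack is shorter than needle, slice just returns the whole
// (shorter) string, so the comparison correctly fails:
console.log(startsWithSlice("ab", "abc"));     // false
```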

Chrome Uses Actual Magic in Their Strings

But just to make sure my intuition about indexOf being slow was correct, I decided to benchmark that as well. Specifically, I decided to benchmark how long browsers took to find a single character in a string that doesn't contain the character:


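The benchmark snippet didn't survive here, but based on the description it was presumably something along these lines (the size values and the choice of "d" as the search character are my reconstruction):

```javascript
// Build a haystack of `size` copies of "abc", then search for a
// character that never appears in it. `size` was varied across
// 1, 10, 100, and 1,000 in the benchmark runs.
const size = 1000;
const haystack = "abc".repeat(size);

haystack.indexOf("d"); // always -1: "d" never occurs in haystack
```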
I expected that every time I multiplied size by 10, I should see roughly a 10x slowdown in operations per second. But then…

| Browser | 1 "abc" | 10 "abc"s | 100 "abc"s | 1,000 "abc"s |
|---|---|---|---|---|
| Chrome 88.0.4324.182 | 866,026,834 | 869,256,882 | 858,563,862 | 865,370,315 |
| Firefox 85.0 | 43,557,275 | 44,947,174 | 41,059,919 (-8.65%) | 16,831,928 (-59.01%) |
| Safari 14.0.1 | 49,240,581 | 34,661,639 (-29.61%) | 7,162,615 (-79.01%) | 877,449 (-87.75%) |

Chrome has… NO REAL SLOWDOWN? WHAT? This data suggests that Chrome can find a substring in any length string in constant time. How is that even possible?

At this point, I felt pretty out of my depth, and decided I needed to ask my friend Luke Westby, Real-Life Computer Genius™, what was going on. He knew right where to look and figured out that Chrome has a special optimization for single-character search strings in indexOf. In other words, people have hand-tuned V8 over the years to go ridiculously fast in all sorts of situations, and this is one of them.

Well, OK, let's try finding some multi-character string instead. I used "nope", which doesn't trigger this optimization, but still fails to match "abc". Does that give us a result that matches my intuition?

| Browser | 1 "abc" | 10 "abc"s | 100 "abc"s | 1,000 "abc"s |
|---|---|---|---|---|
| Chrome 88.0.4324.182 | 885,211,831 | 863,845,523 (-2.41%) | 849,821,335 (-1.62%) | 837,391,727 (-1.46%) |
| Firefox 85.0 | 58,648,716 | 44,107,472 (-24.79%) | 40,529,513 (-8.11%) | 16,768,019 (-58.63%) |
| Safari 14.0.1 | 102,071,445 | 26,242,114 (-74.29%) | 4,212,339 (-83.95%) | 467,329 (-88.91%) |

That's kinda like what I expected, but Chrome is still incredibly fast at this! Well, OK, that's fine. Good for them. But at this point I got curious (and a little devious): how can I defeat Chrome's optimizations? What is the worst-case scenario that they can't possibly have handled already?

I tried to imagine the algorithm that you'd have to write to implement indexOf and came up with something like this:

  1. Look at the first character in haystack.
  2. If it matches the first character in needle, look at the next pairs of characters in the two strings one by one.
  3. If you get to the end of needle and they all match, you've found the index.
  4. But if one pair doesn't match, you've got to start over at the next character in haystack.
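The steps above can be sketched as code like this (my own naive version, not V8's actual implementation):

```javascript
// Naive substring search: for each starting position in haystack,
// compare character pairs until needle is exhausted (a match) or a
// pair differs (restart at the next starting position).
function naiveIndexOf(haystack, needle) {
  outer:
  for (let i = 0; i + needle.length <= haystack.length; i++) {
    for (let j = 0; j < needle.length; j++) {
      if (haystack[i + j] !== needle[j]) {
        continue outer; // mismatch: start over at the next character
      }
    }
    return i; // every pair matched: found the index
  }
  return -1; // ran out of starting positions
}

console.log(naiveIndexOf("abcabcd", "abcd")); // 3
console.log(naiveIndexOf("abcabc", "abcd"));  // -1
```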

With that in mind, what combination of needle and haystack would perform the worst? I think it's any pair of strings where haystack is all but the last character of needle, repeated over and over. So, maybe a benchmark like this?


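The snippet is missing here too, but from the description it was presumably something like this (again, the exact names and sizes are my reconstruction):

```javascript
// Worst case for the naive algorithm: haystack is "abc" (all but the
// last character of needle) repeated, so every starting position
// partially matches before failing on the "d". `size` was varied
// across 1, 10, 100, and 1,000.
const size = 1000;
const needle = "abcd";
const haystack = "abc".repeat(size);

haystack.indexOf(needle); // always -1
```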
That can never succeed, but it'll make the browser do a lot of extra work reading the initially-matching "abc" over and over. Let's see the results:

| Browser | 1 "abc" | 10 "abc"s | 100 "abc"s | 1,000 "abc"s |
|---|---|---|---|---|
| Chrome 88.0.4324.182 | 877,221,230 | 863,929,216 (-1.52%) | 688,398,676 (-20.32%) | 125,023 (-99.98%) |
| Firefox 85.0 | 61,384,762 | 12,874,505 (-79.03%) | 1,366,202 (-89.39%) | 137,245 (-89.95%) |
| Safari 14.0.1 | 100,546,438 | 25,948,095 (-74.19%) | 4,170,375 (-83.93%) | 467,753 (-88.78%) |

Finally, Chrome has similar performance to other browsers. I don't think this is representative of real workloads, but even so Chrome performs admirably. Hats off to the V8 team for this implementation, but honestly, it's good to know they're still mortal!

Comparing the Implementations

So now that we know more about the component pieces involved (for example, to avoid single-character search strings), we can check whether slice actually gives us a real-world speedup. In setting this up, I went for the middle of the road: a workload you can kind of squint at and think "yeah, that will probably happen reasonably often in real life." In this case, that meant the setup looked like this:

haystack = "abc".repeat(100)
needle = "abc"

Then the indexOf and slice benchmarks looked like the code at the top of this post:

haystack.indexOf(needle) === 0
haystack.slice(0, needle.length) === needle

That gave me these results:

| Browser | indexOf | slice | % Diff |
|---|---|---|---|
| Chrome 88.0.4324.192 | 871,312,454 | 870,810,291 | -0.06% |
| Firefox 86.0 | 32,428,394 | 46,845,953 | +44.46% |
| Safari 14.0.1 | 430,854,455 | 1,352,674,261 | +213.95% |

Note that Chrome and Firefox released new versions between the indexOf tests above and this one. I kept the old data here because the update doesn't seem to have changed the performance, but I want to be up front that the versions are different!

Anyway, that said: looks like a result within the margin of error for Chrome (around 0.4%), but a reasonable speedup for Firefox and a big win for Safari.

What does the failure case look like? (Same haystack, but needle = "nope"):

| Browser | indexOf | slice | % Diff |
|---|---|---|---|
| Chrome 88.0.4324.192 | 874,951,212 | 854,116,367 | -2.44% |
| Firefox 85.0 | 31,026,729 | 22,180,114 | -28.51% |
| Safari 14.0.1 | 4,189,768 | 1,508,770,435 | +35,910.83% |

This time, Chrome has a slowdown greater than the measurement error, which surprised me! Same with Firefox, but to a larger degree. But the real story here is Safari, which can now do this operation over 1.5 billion times per second. Billion! What even!

Is It Worth It?

So, with all that measurement done, is this optimization worth it? I'm actually not really sure. It's basically a wash in Chrome but means making some real performance tradeoffs in Firefox. And even though it looks like a huge win in Safari, my claims can only be as strong as the underlying assumptions I'm making in choosing test data.

Of course, if you can measure that your application would benefit from this optimization, go ahead and do it yourself! Calling String.slice 0 (String.length needle) haystack == needle has some overhead in Elm (due to equality having different semantics), but you may still be able to get the majority of the benefit if your users are mostly on Safari, or you're running in iOS WebKit views, or something like that. Just make sure to actually measure it! Optimizations at this level can be helpful, but they're more often dwarfed by DOM work in the runtime.

As for me: at this point, I'm not going to make a PR to elm/core with this improvement, even though I intended to when I set out. However, I found it interesting to look into these things, and I learned a lot about how strings are optimized in various browsers, so I'm gonna call it a win overall.

If you just have questions about this, or anything I write, please feel free to email me!

Otherwise, if you'd like me to email you when I have a new post, sign up below and I'll do exactly that! (You can also get new posts via RSS.)