Short version: If the source video’s frame rate and the export frame rate are the same, then 1.5x is the mathematically worst option, and 1.0x-1.3x or 1.8x-2.0x are the best options.
For other scenarios, here’s the math…
“Keep every Nth frame” is determined by three things: the source video’s frame rate, the export frame rate, and the speed factor (N = source fps ÷ export fps × speed). If I have 30fps source video on a 30fps timeline at 1.0x speed, then N = 30/30 × 1.0 = 1 (keep every frame). But if I have 60fps footage on a 25fps timeline at 1.0x speed, then N = 60/25 × 1.0 = 2.4, which is fractional. A fractional N means the cadence of frame selection is uneven — the gaps between kept frames alternate between 2 and 3 source frames here — and playback stutters every time the running frame count crosses a whole number. (Note that 1.5x crosses a whole number the most frequently of any value between 1x and 2x: every other frame.) But if I have 60fps footage on a 25fps timeline at 5.0x speed, then N = 60/25 × 5.0 = 12, a whole number (keep every 12th frame), and playback will have a smooth cadence in the time-progression sense. All three values must be considered together.
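As a quick sketch of that arithmetic (the function name is mine, and the floor-based selection is a simplifying assumption — a real export engine might round or blend frames instead):

```python
import math

def kept_frames(source_fps, export_fps, speed, export_frames=8):
    """Which source frame index each export frame keeps, assuming the
    engine simply floors a running index (a simplifying assumption)."""
    n = source_fps / export_fps * speed           # the "N" in "every Nth frame"
    return [math.floor(i * n) for i in range(export_frames)]

# 30fps source, 30fps timeline, 1.5x: N = 1.5, gaps alternate 1, 2, 1, 2...
print(kept_frames(30, 30, 1.5))   # [0, 1, 3, 4, 6, 7, 9, 10]
# 60fps source, 25fps timeline, 5.0x: N = 12, gaps are a steady 12 frames
print(kept_frames(60, 25, 5.0))   # [0, 12, 24, 36, 48, 60, 72, 84]
```

The alternating 1-frame/2-frame gaps in the first case are exactly the uneven cadence described above; the steady 12-frame gaps in the second are the smooth case.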
When the source and export frame rates are the same, a speed of 1.5x creates the most uneven cadence because its fractional part, 0.5, is the largest possible misalignment in frame selection. (1.8x behaves like 2.0x minus a small 0.2 offset rather than 1.0x plus 0.8, so its cadence slips far less often.)
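One way to check the “0.5 is worst” claim numerically: under a simple floor-based selection model (my own assumption, not any engine’s spec), the variance of the gaps between kept frames works out to f(1−f), where f is the fractional part of N, and that peaks at f = 0.5:

```python
import math

def gap_variance(speed, source_fps=30, export_fps=30, export_frames=1000):
    """Variance of the gaps between kept source frames, under a
    floor-based frame-selection model (a simplifying assumption)."""
    n = source_fps / export_fps * speed
    picks = [math.floor(i * n) for i in range(export_frames)]
    gaps = [b - a for a, b in zip(picks, picks[1:])]
    mean = sum(gaps) / len(gaps)
    return sum((g - mean) ** 2 for g in gaps) / len(gaps)

# 1.0x and 2.0x give zero variance (perfectly even cadence);
# 1.5x gives the largest variance of any speed between 1x and 2x.
for s in (1.0, 1.2, 1.5, 1.8, 2.0):
    print(f"{s}x -> gap variance {gap_variance(s):.3f}")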
However, the situation changes if the source is a different frame rate than the timeline. If I have a 60fps clip on a 30fps timeline with a speed factor of 1.5x, then the export engine will look for “1.5 frames from the start” in 30fps time and find an actual frame at exactly that offset in 60fps time (source frame 3, since 1.5 × 60/30 = 3), rather than having to settle for a 30fps frame that is plus/minus half a frame’s time from the requested offset. The smaller and more consistent the difference between “requested” and “actual” timestamps is, the smoother the footage looks. The conclusion here is that 1.5x looks very good if the source is twice the frame rate of the timeline. Everything else between 1x and 2x will probably look “good enough” as well.
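The “requested vs. actual timestamp” idea can be sketched the same way (nearest-frame snapping is my assumption about how an engine resolves the request; the function name is mine):

```python
def timestamp_error_ms(source_fps, export_fps, speed, export_frame):
    """How far (in ms) the requested source time sits from the nearest
    real source frame, assuming nearest-frame snapping (a simple model)."""
    requested = export_frame * speed / export_fps      # seconds into the source
    actual = round(requested * source_fps) / source_fps
    return abs(requested - actual) * 1000.0

# 60fps clip on a 30fps timeline at 1.5x: the request lands on a real frame
print(timestamp_error_ms(60, 30, 1.5, 1))
# 30fps clip on a 30fps timeline at 1.5x: odd export frames miss by half a
# frame's time (about 16.7ms at 30fps)
print(timestamp_error_ms(30, 30, 1.5, 1))
```

The first case has essentially zero error for every export frame, which is why 1.5x looks very good when the source is twice the timeline’s frame rate.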
Caveat: If the source footage is variable frame rate from a cell phone, then all bets are off. Going by eye is the only solution there.
On a separate note, the shutter speed used in the source video has a big impact on the final look. I made a time lapse last year while my wife and I set up a tent at a camp site. The camera was snapping JPEG pictures every two seconds, which I compiled into a video. But I had the shutter speed set to 1/8th instead of the usual 1/50th for video (or I may have used 1/4th… been too long to remember).

The point is that each individual JPEG had a lot of motion blur streaks in it as we walked around the camp site. So when the footage was played back (looking sped up due to the every-two-seconds interval timer), there were nice connective trails of blur that made it look like we gracefully “flowed” around the camp site rather than looking like we were being assaulted by a flickering strobe light. Even in video mode, a lot of modern mirrorless cameras can record a 1/8th shutter at 24fps or 30fps by merging light readings from previous frames, and get the same effect.

To tie this into the original question: the sweet spot for a speed-up also depends on the shutter speed used in the source video. The shorter it is, the sooner sped-up video will look jerky and strobe-like, and your options will be limited. The longer the shutter, the more options you have, because there’s enough connective blur to let many speed settings look good.