That is correct - it switched your profile version at a regular interval.
Indeed, that's the only way I could do it. The Twitter API has its limitations :D
I didn't know there was a specific definition of A/B testing. I'll see if I get more complaints about the terms I use ^^.
To me, that's still A/B testing - that is, I'm testing a version A and a version B and then reporting on which one does better. I guess the way I'm doing it is different :D
Definitely agree that this is a good solution given the limitations. I think the only downside to this approach is that there might be some effect from time of day, or from the interval itself, skewing your results. That said, I think the chances of that are super low, and A/B testing is only so accurate anyway. Great idea and nice website!
> I think the only downside to this approach is that there might be some effect from time of day, or from the interval itself, skewing your results.
That is definitely true. I'm about to start alternating the versions every 5 minutes to mitigate this. The closer I can get the interval to 0, the more accurate the results are. That way, even if you get a follower spike (say, from a viral tweet), the new followers will be distributed evenly across the versions.
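The switching-and-attribution loop described above can be sketched roughly like this. This is a minimal illustration, not the author's actual code: `set_profile` and `get_follower_count` stand in for whatever Twitter API calls update the profile and read the follower count, and the interval length is a parameter.

```python
import time
from collections import defaultdict

VERSIONS = ["A", "B"]

def run_switchback(set_profile, get_follower_count, n_intervals,
                   interval_seconds=300):
    """Alternate two profile versions at a fixed interval and credit
    each follower gain to whichever version was live at the time."""
    gains = defaultdict(int)          # followers gained per version
    previous = get_follower_count()   # baseline before the first interval
    for i in range(n_intervals):
        version = VERSIONS[i % len(VERSIONS)]  # alternate A, B, A, B, ...
        set_profile(version)
        time.sleep(interval_seconds)
        current = get_follower_count()
        gains[version] += current - previous   # credit the live version
        previous = current
    return dict(gains)
```

Shrinking `interval_seconds` reduces the chance that a single follower spike lands entirely inside one version's window, which is the point made above.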
Yep - we've run many switchback tests so I'm happy to chat more about it. It's a lot more akin to what you're building here, from a stats point of view.
I feel like that's less of a problem, because users are more likely to convert on the version they like more, which would still produce accurate data.
But yes, there is no perfect solution with the limitations of the API :D
That's cool, man. Apologize only for what's yours to apologize for, and nothing else. Nothing else. And do your thing, following your own vision of your ideas, all the way, and show the world - even if the world won't like it.
You gotta be sure of what you know. Being sure and wrong beats always being unsure.