Curious whether we want to upgrade SexLikeReal script services, along with the upcoming SLR Melody sex toy, to protobuf for more efficiency. Funscript will still be supported as well.
Isn’t protobuf just binary JSON?
Unless you are seriously memory- or speed-limited, I see no point.
Well, or if there’s a protobuf lib that’s better than a json+gz+http lib.
You are on the right track. Protobuf is a serialization mechanism; in practice it amounts to serializing JSON-like data to a binary format.
Funscript is a data format based on JSON, so we are comparing apples and oranges here. No user should care about how data is serialized and transmitted to a sex toy unless they are writing software that integrates with the device.
Protobuf is typically preferred when using gRPC as a service to service communication channel.
A simple REST API is usually sufficient, less complex, and easier for most to integrate with. However, gRPC is good for real-time and streaming applications and for transferring large data. If the device is going to be controlled in real time based on user interaction, then gRPC could be the better choice.
Surprised there was no previous discussion. We are definitely already doing it under the hood, but it might be a good time to introduce it as a consumer format as well.
No need for that, IMHO. You can easily convert a funscript on the fly for protobuf serialization, so the use of protobuf and gRPC is purely a technical implementation detail that users/consumers never need to worry about.
As a comparison, TheHandy doesn’t use the funscript format, the servers convert it in the background before uploading the script to the device.
Adding script formats just cause problems on the scripter and tooling sides since more than one format must be handled. Implement protobuf (and gRPC if that is the plan) and hide it in the implementation.
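To illustrate the "hide it in the implementation" point, here is a minimal sketch of the boundary: accept funscript JSON at the edge, convert it to a typed internal representation, and let a protobuf encoder take over from there. All names here are illustrative, not SLR's actual code.

```typescript
// Hypothetical sketch: funscript JSON stays the consumer format; the
// server parses and normalizes it before any binary serialization.
interface Action {
  at: number;  // milliseconds
  pos: number; // 0..100
}

function parseFunscript(json: string): Action[] {
  const doc = JSON.parse(json);
  if (!Array.isArray(doc.actions)) throw new Error("not a funscript");
  // Clamp to the funscript domain: at >= 0 ms, pos in 0..100.
  return doc.actions.map((a: { at: number; pos: number }) => ({
    at: Math.max(0, Math.round(a.at)),
    pos: Math.min(100, Math.max(0, Math.round(a.pos))),
  }));
}

// From here a generated protobuf encoder (e.g. from protobufjs) would
// serialize the actions; the user only ever sees the .funscript file.
const actions = parseFunscript(
  '{"actions":[{"at":0,"pos":0},{"at":500,"pos":100}]}'
);
```

The point is that the conversion is one-way and invisible: scripters keep authoring funscript, and the wire format can change without anyone updating their tooling.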
It’s all the same thing, just packed differently. I think everyone will be better off moving to protobuf across the whole pipeline.
As long as you serve protobuf to those who send Accept: application/x-protobuf and JSON to those who don’t, literally no one will notice anything.
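That Accept-header negotiation is simple to do; here is a framework-free sketch (the function name and shape are mine, not any existing API). JSON stays the default, so existing clients are unaffected.

```typescript
// Minimal content-negotiation sketch: pick the response encoding from
// the Accept header so protobuf and JSON clients hit the same endpoint.
type Encoding = "application/x-protobuf" | "application/json";

function negotiate(accept: string | undefined): Encoding {
  // Only opt clients into protobuf when they explicitly ask for it;
  // everyone else (including clients sending no Accept header) gets JSON.
  return accept && accept.includes("application/x-protobuf")
    ? "application/x-protobuf"
    : "application/json";
}
```

A real server would also set the matching Content-Type on the response, but the dispatch decision is just this one check.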
If all you do is change the format, it is not worth it. Every format change creates fragmentation, since some software will not update. Cue that famous XKCD comic.
The scenario where it could be worthwhile is if you were to bundle the format change with some substantial changes / extensions to the schema of the data in ways that would provide tangible benefits to end users. For example, including multiple axes in one file or adding device-specific metadata to allow for dynamic adjustments in stroke length to accommodate the capabilities of different hardware.
Basically if you were trying to push a bunch of schema changes that would require software updates anyway, it’s not so much additional work to say “let’s also change the data format in the process.” End users and tool authors may also be more inclined to use the new format if they think they are getting something useful out of it.
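As a sketch of the kind of schema extension described above, a multi-axis file with device metadata might look like this in TypeScript. These field names are hypothetical, not the RFC's actual schema.

```typescript
// Hypothetical multi-axis schema: several axis tracks plus device
// metadata in one file, so players can adjust dynamically per hardware.
interface Action {
  at: number;  // milliseconds
  pos: number; // 0..100
}

interface AxisTrack {
  axis: string; // e.g. "stroke", "twist", "pitch"
  actions: Action[];
}

interface MultiAxisScript {
  version: string;
  device?: { maxStrokeMm?: number }; // lets players scale stroke length
  tracks: AxisTrack[];
}

const example: MultiAxisScript = {
  version: "2.0",
  device: { maxStrokeMm: 120 },
  tracks: [
    { axis: "stroke", actions: [{ at: 0, pos: 0 }, { at: 500, pos: 100 }] },
  ],
};
```

Changes like these require software updates regardless of serialization, which is exactly when bundling a format switch costs the least.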
I’m confused by the inquiry.
You can just do protobuf in the background, transparently. Do you mean making a script creator application that outputs binary?
I doubt that will be well received since that would effectively restrict access to the protocol to your own tooling. I’d hardly call that an “upgrade”
There doesn’t seem to be any gain from this switch, IMO. First, you create vendor lock-in. While this is good from a company perspective, it doesn’t allow consumers to freely use scripts after they have purchased access to them (though if you still allow people to download funscripts, this point isn’t as strong). Second, as others have said, it causes fragmentation without a true reason for it.
Now, converting a funscript to protobuf for better latency, say over Bluetooth or even a socket stream behind the scenes, yeah, that makes sense and doesn’t introduce anything new.
There’s convertibility both ways.
Funscript is redundant by 2025, like floppy disks and CDs back in the day.
That is one of the takes of all time.
Vendor lock-in = a walled garden to keep me out. Careful that the garden plots don’t turn into a swamp…
The very definition of funscript is
type Action = {
  at: number;
  pos: number;
};

type Funscript = {
  actions: Action[];
  metadata: unknown;
  channels: unknown;
};
so in protobuf it will be essentially the same:
syntax = "proto3";

package funscript;

message Action {
  uint32 at = 1;
  uint32 pos = 2;
}

message Funscript {
  repeated Action actions = 1;
  bytes metadata = 2;
  repeated bytes axes = 3;
}
(that’s roughly how I imagine you implemented it)
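To make the "same thing, packed differently" point concrete, here is a back-of-envelope wire-size check assuming the proto above. It hand-rolls the proto3 wire format for a single Action (varint tag bytes plus varint values) purely to count bytes; a real implementation would of course use a generated encoder.

```typescript
// Encode an unsigned integer as a protobuf base-128 varint.
function varint(n: number): number[] {
  const out: number[] = [];
  do {
    let b = n & 0x7f;
    n >>>= 7;
    if (n) b |= 0x80; // continuation bit
    out.push(b);
  } while (n);
  return out;
}

// proto3 wire format for message Action { uint32 at = 1; uint32 pos = 2; }:
// field 1 tag = 0x08, field 2 tag = 0x10 (both wire type 0, varint).
function encodeAction(at: number, pos: number): number[] {
  return [0x08, ...varint(at), 0x10, ...varint(pos)];
}

const proto = encodeAction(1000, 50); // 5 bytes on the wire
const json = JSON.stringify({ at: 1000, pos: 50 }); // 20 characters
```

So a typical action shrinks from ~20 JSON bytes to ~5 protobuf bytes before any gzip, which is the kind of saving being argued about in this thread.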
Please post the .proto for further discussion
We are still putting the whole thing together.
It’s more of an idea-level discussion right now, but we are certainly giving it a test.
Will keep you posted
You may check discussion at
https://discuss.eroscripts.com/t/rfc-single-file-multi-axis/267449/114
It has some data on the multi-axis funscript we are going to switch to later this year
(xtp/mfp/ofs already have it, on ES it may be enabled in settings)
(“?” button under the script heatmap)
Maybe a bit too straightforward question, but what’s your background?
Every CS engineer knows JSON is the square hole
Yes, I think protobuf is probably the way to go. We’ve talked about doing it for tcode before. An encoded proto message should be smaller than a json object on the wire and we need that, especially over network transmissions. I think it will be lighter on the embedded system as well.
Changing funscript from JSON to protobuf just to compress the actions is pointless. Unless it’s for some new format that requires a lot of new data, for example cubic/Bézier moves, but even then it’s pretty pointless.
There will be minimal or no benefit in transfer time. A 1 KB vs. a 1 MB file will take roughly the same time to download, since for files this small most of the time goes to general network overhead (connection setup and request round-trips), assuming sufficient download bandwidth.
It makes more sense for a device API or for streaming live commands.
