|
Way back in the dim dark past we simply added a value to the ASCII code of each character, effectively moving it out of the normal text range; reverse the process to read the content. I think the value was 75.
Never underestimate the power of human stupidity -
RAH
I'm old. I know stuff - JSOP
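For illustration, the character-shift trick described above can be sketched in Python. This is obfuscation, not encryption; the shift value 75 is the one mentioned in the post, and the function names are made up:

```python
SHIFT = 75  # the value mentioned above; any fixed offset works

def obfuscate(text: str) -> str:
    """Shift every character's code point up by SHIFT,
    moving it out of the normal printable range."""
    return "".join(chr(ord(c) + SHIFT) for c in text)

def deobfuscate(text: str) -> str:
    """Reverse the shift to recover the original text."""
    return "".join(chr(ord(c) - SHIFT) for c in text)
```

Anyone who spots the pattern can reverse it trivially, which is the point being made in this thread.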
|
|
|
|
|
I think I will just zip the file and give it a .bin extension; that should make it unreadable.
|
|
|
|
|
If it is on a webserver, then you should simply pick an extension that the server doesn't "serve"; also, if you were using a database, you could simply have some tables that you don't make accessible through the UI.
Do take into consideration that some non-techies call in the help of a forum to hack into "unreadable" files.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
|
|
|
|
|
If you go to zip, why not use the proper encryption facilities of zip itself?
|
|
|
|
|
True encryption seems a bit of an overkill. But I might add a password, just in case someone figures out that it's a zip file.
|
|
|
|
|
That gives a false idea of security.
online zip password cracker - Google Search[^]
|
|
|
|
|
The users of the application are my co-workers, not ill-intentioned hackers.
|
|
|
|
|
Being one does not automatically mean that you cannot be the other.
Multiple people wrote: You should treat all user input as potentially malicious.
|
|
|
|
|
If the filesystem is NTFS you could store the real data in an alternate data stream. A casual user would be unlikely to find that.
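A minimal sketch of that idea in Python; the helper names and the stream name are made up for illustration, and writing to the stream only works on NTFS volumes under Windows:

```python
import sys

def ads_path(path: str, stream: str) -> str:
    """Build the NTFS alternate-data-stream path, e.g. 'report.txt:secret'."""
    return f"{path}:{stream}"

def write_hidden(path: str, stream: str, data: bytes) -> None:
    """Write data into an alternate data stream of an existing file.
    Raises on non-Windows platforms, where ADS does not exist."""
    if sys.platform != "win32":
        raise OSError("alternate data streams require NTFS on Windows")
    with open(ads_path(path, stream), "wb") as f:
        f.write(data)
```

The main file's size and content look unchanged in Explorer; tools like `dir /r` or `Get-Item -Stream *` will still reveal the stream, so this only defeats casual inspection.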
|
|
|
|
|
What tech/architecture is recommended for handling a high volume of real-time data from the server to the clients?
Please note: gathering data from client to server is not difficult; we have ingestion points to collect the data.
But sending data from server to client looks like it needs some good study and evaluation.
The cave-man idea that immediately comes to mind is using sockets and streaming data back.
Are there any recent frameworks/services that do this well?
Google Cloud/Firebase has options, which we do use. But is there anything better out there?
Think of it as a share-market application, where you have to update all connected clients with real-time data every few seconds. This is not a domain I'm in, but the requirement is technically the same: it's all about sending data in real time.
Client - Web/Android/iOS
modified 26-Apr-20 10:00am.
|
|
|
|
|
Just to be sure I understand, you want to send lots of data from a server to the clients? Could there be more than one server doing this?
|
|
|
|
|
Sorry, I just updated my question to be clearer.
It's server to client.
Greg Utas wrote: Could there be more than one server doing this?
Yes, but to begin with we could discuss a single-server scenario sending data to 1000 clients, to keep the use case simpler.
|
|
|
|
|
I don't know which frameworks are suitable for this. If you have to start from scratch, try TCP. If it doesn't scale to meet your needs, look at RTP[^]. If message loss or reordering is acceptable, you could even go with basic UDP. It shouldn't be hard to change the protocol if your software is well structured.
You might find my article Software Techniques for Lemmings (link in .sig below) useful. It describes things that should generally be avoided in serious servers.
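As a starting point for the plain-TCP route, a bare-bones push server might look like this in Python. The function names and the update callback are illustrative; a production server would need message framing, backpressure, and proper error handling:

```python
import socket
import threading
import time

def serve_feed(host, port, make_update, interval, stop):
    """Accept TCP clients and push each update to every connected client.
    make_update: callable returning the next bytes payload.
    stop: threading.Event used to shut the loops down."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((host, port))
    server.listen()
    clients = []

    def accept_loop():
        while not stop.is_set():
            try:
                conn, _ = server.accept()
                clients.append(conn)
            except OSError:
                break  # server socket closed

    def push_loop():
        while not stop.is_set():
            data = make_update()
            for conn in list(clients):
                try:
                    conn.sendall(data)
                except OSError:
                    clients.remove(conn)  # drop disconnected clients
            time.sleep(interval)

    threading.Thread(target=accept_loop, daemon=True).start()
    threading.Thread(target=push_loop, daemon=True).start()
    return server
```

With thousands of clients you would replace the per-connection blocking sends with an event loop (`selectors`, asyncio) or move to one of the protocols mentioned above.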
|
|
|
|
|
What about a multicast scenario? Would that work? Do all clients want the same data? I honestly don't know anything about multicast, other than there's such a thing. It might work for you, but you'd have to look at routing issues and maybe a way to resend when a client misses some data or goes offline.
Keep Calm and Carry On
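For what it's worth, the sending side of IP multicast is simple to set up; here is a minimal Python sketch (the group address and port are arbitrary examples from the administratively-scoped range). Receivers must join the group, and as noted below, delivery is unreliable UDP:

```python
import socket

MCAST_GROUP = "239.1.1.1"  # example administratively-scoped group
MCAST_PORT = 5007

def make_multicast_sender(ttl: int = 1) -> socket.socket:
    """Create a UDP socket configured for multicast sending.
    A low TTL keeps datagrams on the local network."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, ttl)
    return sock

def publish(sock: socket.socket, payload: bytes) -> None:
    """Send one datagram to every subscriber of the group."""
    sock.sendto(payload, (MCAST_GROUP, MCAST_PORT))
```

Receivers join with `IP_ADD_MEMBERSHIP`; resending missed data would have to be layered on top, which is what protocols like RTP address.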
|
|
|
|
|
Various protocols support multicasting. If you start investigating RTP, you will find related protocols for multicasting (streaming) video. They usually allow dropped frames, so it's a question of whether it's OK for clients to not have all of the server's information.
Off the top of my head, I would implement this with three threads:
- The first thread writes the data that will be streamed to clients into a large circular buffer.
- The second thread services the clients by reading from the buffer to TCP-send each client its next batch of data.
- If clients acknowledge reception so as not to lose data, the third thread uses TCP-poll and TCP-recv to handle acknowledgments on the client sockets.
The buffer's start pointer advances over data after it has been streamed to, or acknowledged by, all clients.
Edit: You might want to look at the Robust Services Core (see link in .sig), an open-source framework that I developed. It could probably be used to develop a server that supports this kind of application. What's in its nb and nw directories would probably be adequate.
modified 26-Apr-20 11:06am.
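A single-process sketch of the buffer at the heart of that design, in Python: a circular buffer whose start pointer only advances past data once the slowest client has consumed it. The class and method names are illustrative, and a real version would TCP-send instead of returning items:

```python
import threading

class BroadcastBuffer:
    """Circular buffer shared by one producer and N client-serving readers.
    A slot is only overwritten once every client has read (acknowledged) it."""

    def __init__(self, capacity: int, n_clients: int):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.head = 0                    # absolute index of next write
        self.read = [0] * n_clients      # per-client absolute next-read index
        self.cond = threading.Condition()

    def produce(self, item) -> None:
        with self.cond:
            # Block while the slowest client is a full buffer behind.
            while self.head - min(self.read) >= self.capacity:
                self.cond.wait()
            self.buf[self.head % self.capacity] = item
            self.head += 1
            self.cond.notify_all()

    def consume(self, client: int):
        """Return the next item for this client, blocking until one exists."""
        with self.cond:
            while self.read[client] >= self.head:
                self.cond.wait()
            item = self.buf[self.read[client] % self.capacity]
            self.read[client] += 1
            self.cond.notify_all()
            return item
```

The `min(self.read)` check is the "start pointer advances over data acknowledged by all clients" rule; a production version would also need a way to evict or catch up clients that stall.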
|
|
|
|
|
|
Nand32 wrote: It's Server to Client. Like a generic download, similar to downloading a DVD?
|
|
|
|
|
|
Use pull technology; "sending" / pushing rarely makes sense. You get better control of the "cycle" time, which you say in your case is seconds.
It was only in wine that he laid down no limit for himself, but he did not allow himself to be confused by it.
― Confucian Analects: Rules of Confucius about his food
|
|
|
|
|
Cool. I'm also thinking of a hybrid idea I've done before:
the hint that new data is available arrives through a "push",
but the decision whether to fetch, and the actual data pull, happen separately.
The notification mechanism won't be needed if it's a consistently high-rate requirement like a share market.
If the solution involves stopping data production intermittently, then a push notification to trigger start/stop could be useful.
Will try it out.
Thanks a lot.
|
|
|
|
|
Nand32 wrote: The hint that the new data is available happens through "Push". I'm happily querying a webserver for a file by requesting its header along with the last-modified datetime of my file. If my file is from the same date, the server only sends back a header with a 304 (Not Modified) response, and I use the cached copy.
If the clients need to be usable during the download, I'd go for the BITS service that Windows uses to download its own updates. If fetching the data is more important than client responsiveness, I'd search CodeProject for a download manager, open 10 connections to the server from the client, and have each download a tenth of the file.
If speed is paramount then I'd recommend QNX, not Windows.
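The conditional-request part of that can be sketched with Python's standard library; `urlopen` raises an `HTTPError` with code 304 when the cached copy is still current. The URL and helper name below are placeholders:

```python
import urllib.request
from email.utils import formatdate

def conditional_request(url: str, last_modified_ts: float) -> urllib.request.Request:
    """Build a GET that asks the server to reply 304 (Not Modified)
    if the resource hasn't changed since our cached copy's timestamp."""
    req = urllib.request.Request(url)
    # HTTP dates must be RFC-1123 formatted and in GMT.
    req.add_header("If-Modified-Since", formatdate(last_modified_ts, usegmt=True))
    return req
```

The caller would wrap `urllib.request.urlopen(req)` in a try/except for `HTTPError`, treating code 304 as "serve from cache" rather than as a failure.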
|
|
|
|
|
|
|
I have an app that has three sets of features: Agent, Dispatcher, and Admin.
Each one uses the same domain models, but the context of use is different.
And now the Admin gets the Ticket object, which contains data used by the Agent and the Dispatcher.
The app has a backend in Java and a frontend in Angular. They communicate over HTTP + JSON.
Should I separate them by domain and create three separate microservices?
Would that be overkill, as the app is not big?
Or should I keep the app as a monolith and just reorganize the code into packages: Admin package, Dispatcher package, etc.?
|
|
|
|
|
Since all those features use the same bounded context, there is really no need to split the system into microservices.
Quote: and just reorganize the code in packages Admin Package, Dispatcher Package
That is a nice idea; a sort of vertical slicing.
|
|
|
|