I would like to make it "impossible" for non-technical people to directly read a plain text file in e.g. Notepad or Microsoft Word, but I should be able to modify the file in a simple way and then read it. One idea would be to use some kind of 16-bit-per-character ASCII encoding, remove the first byte to make the file unreadable, and put it back when I want to read it. Would this work in practice, and what file extension and editor (WordPad?) should I use to read the file properly? Any other ideas? I am working in a standard Windows 10 environment that comes with basic applications such as Notepad and WordPad, and I don't want to write any software or install any new applications.
One idea would be to use some kind of 16-bit/character ASCII encoding and then remove the first byte to make it unreadable and put it back when I want to read it.
Where would you put these bytes, and how would you replace them when you wanted to see the clear text? But the real question is why do you need to do this? What information are you trying to secure, and why can you not just save it somewhere safe?
Where would you put these bytes, and how would you replace them when you wanted to see the clear text?
I would put the byte at the start of the file, causing the upper and lower bytes of each character to be swapped in the 16-bit encoding. I would do this using Notepad, which operates on 8-bit ASCII, and to view the file I would use an editor that supports 16-bit characters. Does anybody know if the RTF format would be a good choice?
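For what it's worth, the misalignment effect itself is real. Here is a minimal sketch, assuming "16-bit ASCII" means UTF-16-LE (what Notepad calls "Unicode"): dropping one leading byte shifts every 16-bit code unit, so the text decodes as garbage until the byte is restored.

```python
# Assumption: the file is saved as UTF-16-LE ("Unicode" in Notepad).
text = "Secret index"
data = text.encode("utf-16-le")      # 2 bytes per character

scrambled = data[1:]                 # remove the first byte: every code
                                     # unit is now misaligned by one byte
garbled = scrambled.decode("utf-16-le", errors="replace")
restored = (data[:1] + scrambled).decode("utf-16-le")

print(garbled)     # unreadable CJK-range characters
print(restored)    # Secret index
```

Note that this only obscures the text; anyone who opens the file in a hex editor, or pastes it through an encoding converter, can recover it, so it is obfuscation rather than security.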
Richard MacCutchan wrote:
But the real question is why do you need to do this? What information are you trying to secure, and why can you not just save it somewhere safe?
I need to create an index to a bunch of files in a server folder that you are not supposed to access directly; you are supposed to go through a web interface. Unfortunately, the web interface is a bit naive and doesn't accurately reflect what's inside the server folder. Most users don't know the path to the server folder, and I would like to keep it that way.
You are correct that this is not the best solution. The best solution would be to link the file index into the .exe file and then, when needed, alter the .exe file using the rename-the-exe-while-it-is-in-use-and-create-a-new-one-with-the-original-name technique, but in case I can't pull that off I would like to have a backup solution.
If it is on a webserver then you should simply pick an extension that the server doesn't "serve"; also, if you were using a database then you could simply have some tables that you don't make accessible over a UI.
Do take into consideration that some non-techies call in the help of a forum to hack into "unreadable" files.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
What tech/architecture is recommended to handle a high volume of real-time data from server to clients?
Please note: gathering data from client to server is not difficult; we have ingestion points to collect the data. But sending data from server to client looks like it needs some good study and evaluation.
The caveman idea that immediately comes to me is to use sockets and keep streaming data back.
Are there any recent frameworks/services available that do this best? Google Cloud/Firebase has options, which we do use, but is there anything better out there?
Think of it as something like a share market application, where you have to update all the connected clients every few seconds with real-time data. This is not a domain I'm in, but the requirement is technically the same: it's all about sending data in real time.
I don't know which frameworks are suitable for this. If you have to start from scratch, try TCP. If it doesn't scale to meet your needs, look at RTP[^]. If message loss or reordering is acceptable, you could even go with basic UDP. It shouldn't be hard to change the protocol if your software is well structured.
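To illustrate the basic-UDP option mentioned above, here is a minimal loopback sketch (the port and payload are made up for the example). UDP is fine when occasional loss or reordering is acceptable; switching the socket type to SOCK_STREAM gives you TCP when every update must arrive, in order.

```python
import socket

# Receiver: bind to loopback on an OS-assigned free port.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))
port = recv_sock.getsockname()[1]

# Sender: fire one datagram at the receiver. No connection, no handshake.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"tick:42.17", ("127.0.0.1", port))

payload, addr = recv_sock.recvfrom(1024)
print(payload)    # b'tick:42.17'

send_sock.close()
recv_sock.close()
```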
You might find my article Software Techniques for Lemmings (link in .sig below) useful. It describes things that should generally be avoided in serious servers.
What about a multicast scenario? Would that work? Do all clients want the same data? I honestly don't know anything about multicast, other than there's such a thing. It might work for you, but you'd have to look at routing issues and maybe a way to resend when a client misses some data or goes offline.
Various protocols support multicasting. If you start investigating RTP, you will find related protocols for multicasting (streaming) video. They usually allow dropped frames, so it's a question of whether it's OK for clients to not have all of the server's information.
Off the top of my head, I would implement this with three threads:
The first thread writes the data that will be streamed to clients into a large circular buffer.
The second thread services the clients by reading from the buffer to TCP-send each client its next batch of data.
If clients acknowledge reception so as not to lose data, the third thread uses TCP-poll and TCP-recv to handle acknowledgments on the client sockets.
The buffer's start pointer advances over data after it has been streamed to, or acknowledged by, all clients.
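The three-thread design above can be sketched in-process; as an assumption for a stand-alone example, the TCP sends are replaced with per-client lists and the acknowledgment thread is folded into the streamer. A producer fills a circular buffer, the streamer copies pending data to every client, and the buffer's start pointer only advances past data that all clients have received.

```python
import threading
from collections import deque

BUF_SIZE = 8
buffer = deque(maxlen=BUF_SIZE)      # circular: oldest items drop when full
lock = threading.Lock()
clients = {"A": [], "B": []}         # per-client inboxes, stand-ins for TCP sends

def producer():
    """Thread 1: write data to be streamed into the circular buffer."""
    for i in range(5):
        with lock:
            buffer.append(i)

def streamer():
    """Thread 2: send each client the pending data, then advance the
    buffer's start pointer past everything all clients now have."""
    with lock:
        snapshot = list(buffer)
    for item in snapshot:
        for inbox in clients.values():
            inbox.append(item)       # stand-in for a TCP send per client
    with lock:
        for _ in snapshot:           # advance start pointer
            buffer.popleft()

t = threading.Thread(target=producer)
t.start()
t.join()                             # wait for the batch before streaming it
streamer()
print(clients["A"], clients["B"], list(buffer))
# [0, 1, 2, 3, 4] [0, 1, 2, 3, 4] []
```

In a real server the producer and streamer would run concurrently and the streamer would track a per-client read offset rather than draining in one batch, but the ownership rule is the same: only fully-delivered data is reclaimed.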
Edit: You might want to look at the Robust Services Core (see link in .sig), an open-source framework that I developed. It could probably be used to develop a server that supports this kind of application. What's in its nb and nw directories would probably be adequate.
Cool. I'm also thinking of a hybrid idea. I've done this before.
The hint that new data is available arrives through a push, but the decision whether to fetch, and the actual data pull, happen separately.
The notification mechanism will not be required if it's a consistently high-speed requirement like a share market.
If the solution involves stopping data production intermittently, then a Push-notification mechanism to trigger start/stop could be useful.
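The hybrid above can be sketched like this (all names here are illustrative, not from any framework): the server pushes only a lightweight version number, and each client compares it with what it already has and pulls the full payload only when it is stale.

```python
# Server-side state (stand-ins for a real data store).
latest_version = 3
payloads = {3: {"price": 42.17}}

class Client:
    """Holds the last version seen and the last payload pulled."""
    def __init__(self):
        self.version = 0
        self.data = None

    def on_push_hint(self, version):
        # The push carries only a version number, not the data itself.
        if version > self.version:          # decide whether to fetch
            self.data = payloads[version]   # the actual pull
            self.version = version

c = Client()
c.on_push_hint(latest_version)   # stale -> pulls the payload
print(c.version, c.data)         # 3 {'price': 42.17}
c.on_push_hint(latest_version)   # already current -> no pull
```

The appeal of this split is that the push channel stays tiny and cheap, while the heavy transfer only happens for clients that actually need it.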
Will try it out.
Thanks a lot.