If you're intending for a user to go to a third-party website and have the site somehow control a scanner they may or may not have, then that's a dead end. That goes against everything a browser is supposed to do.
Your site should direct them to upload the scanned file, but the actual generation is in the user's hands; obviously your site could provide guiding instructions.
If you're creating a scanner client and just want to use HTML for the interface, that's doable.
Thanks for your reply,
I am surprised to see replies saying that document scanning is not possible from the client side. It is certainly true that secure browsers do not allow this access.
But the question remains: how can we enable a client-side document scanner to be accessed from the web? Is there any new technology, approach, or possibility?
Well, the other solution could be that the user installs a local piece of software that connects both to a 'control' server and to the locally attached scanner. A separate web site could then send a message to the control server, which would tell the local software to act.
But note that the whole interaction makes no sense: far too much of the process is the user interacting with the physical scanner, so controlling it 'remotely' via web sites doesn't help the process at all.
Now, if your actual concern is that the 'scan, save the file, upload the file to your site' process isn't as easy as it could be, then yes, I can understand that; but the solution would be a downloadable piece of software that interacts with all scanner types, scans, and uploads the file directly to your servers.
I think most people hitting this concern have decided it's better to educate the users (via the website) on how to do the process rather than solve it with software.
At my company there are a few programmers who are either lazy, not quality-minded, unintelligent, or all of the above. I have written a driver, and they have to provide a GUI on top of that driver. Normally my driver would be very thin, with no restrictions, but knowing the quality of my co-workers I have added a validation layer on top of the low-level drivers, because I care about the success of my company.
How would you recommend I minimize the risk that they bypass the validation layer and call the low-level functions directly? Should I encode my variables, function names, and parameters using some kind of ASCII encoding algorithm, so it's difficult to understand what the functions do? Should I name my low-level functions something like KEVIN_JEFF_AND_MARK_NEVER_CALL_THIS_FUNCTION_writeMemory(uint32_t addr, uint8_t value)? Any other ideas? I've already talked to my boss, and he thinks it's too difficult to find new programmers, so getting rid of them isn't an option.
Use a wrapper class then. Give the "bad programmers" a wrapper that calls your DLL's "open" methods, then make up a stupid name for the DLL that they can't guess: StupidProgrammer.dll, for example. Though they'd have to add the stupid DLL to the project too.
It was only in wine that he laid down no limit for himself, but he did not allow himself to be confused by it.
― Confucian Analects: Rules of Confucius about his food
Don't expose the functions as public, then? They should only see the public interfaces you want them to see.
Also, start to employ more defensive programming within those functions: validate all the incoming parameters, and fail early and often if they're not within spec. If it's a question of function ordering, then maybe you could add an audit-type layer, so you can ensure function x is only called after function y.
Generally though, why would they call a function they don't need? If they're calling it from the GUI then surely there's a spec that says that's what the GUI needs to do? And your driver should be providing a safe method for that?
Got some time left, so revisiting this thread. YOU wanted the discussion.
Everything we do at my company is serverless, because we don't want to have to deal with hardware in any way. We do, of course, have a laptop, but that's it.
By that definition, "calc.exe" has a serverless architecture, which is, of course, nonsense.
For any desktop app that doesn't communicate, "serverless" is a side-effect, not the main architecture.
One still has to learn about the pros and cons of every option to make an (informed) choice.
CSLA is a nice option for WinApps, which isn't helpful for calc.exe either. Still works wonders for some applications.
Things become interesting when you have multiple options; imagine a chat application: it could be serverless, of course, but it could also use CSLA for the UI, or it could be completely SOA. So what would you even call that?
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
I'm not sure if you're aware of the marketing-buzzword version of "serverless" that's currently being pushed by cloud providers: mostly products that are quick and cheap to get up and running (which is not to say anything about ongoing costs and vendor lock-in). Given the mention of SOA, I'd guess the OP has been reading a lot about how to build "modern", "cloud-native" applications.
Does it make sense to have multiple resource files per culture, to organize things into logical groups (as opposed to one monolithic file per culture)? I'm thinking of things like Labels<.culture>.resx, ValidationMessages<.culture>.resx, etc.
Just getting into localization for the first time. (C#, MVC if that's important).
Created a little POC form where I had ResourceTest<.culture>.resx files as embedded resources. I decorated model properties with data annotations to display the label for each field and a required-field message, made the button text vary with the culture, simple things like that. That seems to be working OK.
Thinking about applying localization to the entire application, it seems as if a single resource file per culture could get huge and unwieldy, so I thought it might be smart to have several .resx files per culture, to make it easier to find existing names/values. Not married to the idea of "functional areas" to group the data.
Doing some googling around, I couldn't find any best practice (or not) on this idea.
I'm using embedded resources, and with data compression my data is less than 1/3 of its uncompressed size; I decompress the resource stream at run time.
The resources were .NET "content" objects that were created, then binary-serialized, then compressed, then embedded.
The Master said, 'Am I indeed possessed of knowledge? I am not knowing. But if a mean person, who appears quite empty-like, ask anything of me, I set it forth from one end to the other, and exhaust it.'
― Confucian Analects