It's reference "junk" when it's in someone else's system that you have no control over. And anything you put in the way of "the payment process" makes you the problem.
A company issues its own "client numbers", purchase order numbers, invoice numbers, product numbers, etc., which become "reference junk" in someone else's system that they then send back to you as more "reference junk". You reconcile your own paperwork, not someone else's.
I'd send "checklists" of the outstanding invoices if they wanted to "reference" something.
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
This whole process is outside of this discussion. I have no control over how my client processes their invoices.
Thanks anyhow.
"In theory, theory and practice are the same. But in practice, they never are."
If it's not broken, fix it until it is.
Everything makes sense in someone's mind.
You have (some) control when inputting from documents; you can use "fuzzy" searching then and there.
You said CSV ... which means you have to get it into the system "before" you can do anything with it. Throughput.
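A minimal sketch of the sort of "fuzzy" matching I mean, ranking known references by plain edit distance (all names made up):

    // Classic dynamic-programming Levenshtein edit distance.
    function editDistance(a: string, b: string): number {
      const d: number[][] = Array.from({ length: a.length + 1 }, (_, i) =>
        Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
      );
      for (let i = 1; i <= a.length; i++) {
        for (let j = 1; j <= b.length; j++) {
          const cost = a[i - 1] === b[j - 1] ? 0 : 1;
          d[i][j] = Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost);
        }
      }
      return d[a.length][b.length];
    }

    // Rank the known references by closeness to whatever arrived on the document.
    function bestMatch(input: string, knownRefs: string[]): string | undefined {
      const dist = (r: string) => editDistance(input.toUpperCase(), r.toUpperCase());
      return [...knownRefs].sort((x, y) => dist(x) - dist(y))[0];
    }

    // bestMatch("INV-10O3", ["INV-1003", "INV-1030", "PO-1003"]) -> "INV-1003"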
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
Kevin Marois wrote: What's the right way to go about this?
Your requirements description is incomplete, for a start.
You must also answer the question - what do they want to happen when it is wrong? For example: discard the entire CSV, ignore that one row, collect the failures and present them immediately to a human, or collect them somewhere and allow for a report. There might be some others.
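For example, a minimal sketch of the "collect the failures" option (all names hypothetical):

    interface RowError { line: number; message: string; }

    // Validate each row; keep the good ones and collect the failures,
    // instead of silently dropping them or aborting the whole file.
    function importCsvRows(
      rows: string[][],
      validate: (row: string[]) => string | null  // null = row is fine
    ): { accepted: string[][]; failures: RowError[] } {
      const accepted: string[][] = [];
      const failures: RowError[] = [];
      rows.forEach((row, i) => {
        const problem = validate(row);
        if (problem === null) accepted.push(row);
        else failures.push({ line: i + 1, message: problem });
      });
      return { accepted, failures };  // caller decides: report now, or store for later
    }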
You should also decide who actually manages it. Is it set up on install? Can it change day to day? Can different users change it (so not the customer, but individual customer users)?
Those answers drive how you configure it. For example, if it is set at install then you would need to ask for it during install. (This probably is not viable, since their needs might change over time.)
But other than that, your application should have a section specifically for customer configurations. Yes, plural. Presume one now and more in the future. And possibly one for the application (admin users only) and one for normal users. Your application might not support multiple users, so the different levels might not apply.
With both application level and user level, you need to decide if the user one overrides the application one completely or if both work together.
You can save the configuration in a configuration file or a database. Or another persistent store. You would normally load the configuration on start-up. Naturally, updates while running must impact the loaded configuration also.
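A minimal sketch of one way to do the layering, assuming a JSON file store (paths and names made up):

    import { readFileSync } from "fs";

    type Config = Record<string, unknown>;

    // Read one configuration layer; a missing or unreadable file is an empty layer.
    function loadLayer(path: string): Config {
      try { return JSON.parse(readFileSync(path, "utf8")); }
      catch { return {}; }
    }

    // User-level settings override application-level settings key by key;
    // anything the user has not set falls through to the application layer.
    function loadConfig(userId: string): Config {
      const appLayer = loadLayer("config/application.json");
      const userLayer = loadLayer(`config/users/${userId}.json`);
      return { ...appLayer, ...userLayer };
    }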
I agree 100%. Have some way or other to capture this for future use, irrespective of which customer adds what as their own reference. Then just do a callback to that specific customer, and you should be able to tie the reference to the correct file reference.
I just started working with a business that made a web application with a nodejs-expressjs backend API and a React front end. The business wants to sell its software as a white label solution to some enterprise-sized businesses. My manager says that the customers will be expecting a detailed report to convince them that our solution is "secure". I need to determine the steps to producing such a security report.
My first thoughts are to follow these steps:
1. Run the npm audit command on our backend and front end projects to identify all known vulnerabilities, and then fix them according to recommended approaches I read about on the internet. This step has been done. The npm audit command shows no vulnerabilities or issues of any kind.
2. We upload our code as Docker images to dockerhub.com. Dockerhub shows a list of vulnerabilities for us to address. I am currently on this step, and I have some issues which I will elaborate on further down in this post.
3. Hire a 3rd party cyber security firm to test our solution. This firm will give us a report of issues to address.
That's my overall plan. However, I am currently stuck on step 2. Dockerhub is showing me MANY Critical and High priority vulnerabilities, such as the following:
CVE-2021-44906 - An Uncontrolled Resource Consumption flaw was found in minimist
https://access.redhat.com/security/cve/cve-2021-44906
CVE-2022-37434 - zlib through 1.2.12 has a heap-based buffer over-read or buffer overflow in inflate in inflate.c via a large gzip header extra field.
https://nvd.nist.gov/vuln/detail/CVE-2022-37434
...etc...
According to Dockerhub, there are about 100 of these types of vulnerabilities, where maybe 10% are critical, 15% are high, and the rest are medium or low. These issues look very difficult to address, because they come from modules of modules that I don't directly use in my own software. Trying to replace these modules of modules basically means a complete rewrite of our software to not depend on ANY open source solutions at all! And I'm sure that if I were to scan the packages with another type of scanner, a different set of vulnerabilities would be exposed. And I haven't even gotten to step 3 yet.
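For what it's worth, npm (8.3+) does offer an "overrides" field in package.json to force a patched version of a transitive dependency without swapping out the direct dependency. A rough sketch, with placeholder package names and versions:

    {
      "name": "backend-api",
      "dependencies": {
        "some-direct-dependency": "^2.0.0"
      },
      "overrides": {
        "minimist": "^1.2.6"
      }
    }

After npm install regenerates the lock file, a rescan shows whether the pin resolved the finding. It only helps when the patched version is API-compatible with whatever depends on it, so it's a per-finding fix, not a blanket one.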
So this got me wondering... how do other organizations selling white labelled solutions go about disclosing vulnerabilities to their end clients, and how do they protect themselves?
I started thinking that maybe I don't have to deal with every single security vulnerability that exists. Instead, I should only address security issues that I am confident hackers will exploit, or things that are easy to address. Then I hire a third-party security firm to find other vulnerabilities. Anything that's not caught by the security firm we deem "not important". And we develop some contract and service agreement that protects our business from legal action if our clients experience a security vulnerability not covered in our report?
But then a customer will say, "But dockerhub.com clearly shows vulnerability X, and you as the seller were aware of vulnerability X; please justify to us why you did not address it." How do we respond then?
That's what's in my head right now.
So, back to my original question - what steps should a team take to address the security concerns of software that will be white labelled and sold to customers?
If you're getting a "third party" to "certify" your software, you should be consulting with them, not the public.
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
mozilly wrote: My first thoughts are to follow these steps:
That is not how you go about it.
That is like attempting to write code when you do not even know what the requirements are.
mozilly wrote: My manager says that the customers
Any larger company will expect this. Mid-size companies likely will as well. Depending on the business domain, every customer might require it.
mozilly wrote: what steps should a team take to address security concerns
Obviously application security is a part of it. But so is company security.
Large companies will require 3rd party security audits. Smaller ones might also.
Steps:
1 - Investigate the various parts of security needed.
2 - Software security.
3 - Employee training.
4 - Employee access. Specifically, how access is turned off when an employee exits the company, and who has access to what.
5 - Reviewing code specifically for security vulnerabilities, with tools and manual review.
6 - 3rd party audits.
7 - A DOCUMENTED Security Plan for the company. That includes all of the above.
8 - DOCUMENT all of the steps taken (which would be in the Security Plan). You will need to track where those documents live.
9 - The Security Plan must include how to DOCUMENT exceptions to the plan and solutions to problems discovered.
10 - One or more people assigned to the role of ensuring that the Security Plan is followed.
3rd party audits will likely look at all of the above.
People tend to skip 9 because they think/claim that exceptions will not occur. Then, when they do occur, there is no way to deal with them, and the issue ends up being ignored.
We're getting pressure from one of our customers to internationalize our software product. All of our currency and dates are handled correctly, or get fixed quickly. We have about 70% of the words translated via resx files. There are also some database translations where we allow customization. All of this works.
However, since it isn't 100% complete, it came up in a discussion with management. One of the devs wants to remove the resx files and put all translations in a database table (actually three). I'm curious whether anybody out there has strong opinions on whether database-only is better than resx translations, or not. There are articles out there and Stack Overflow questions, but most of it is older.
Is resx still in favor? Is it a good choice? My feeling is that re-working all of the resx into some new custom format isn't a good use of our time.
Thanks for your thoughts.
Hogan
That's ... interesting.
Putting every string of text into the database is only going to slow down the entire app and, as an added bonus, give any admin users the ability to change the translations at will! Imagine the fun with a disgruntled admin!
Oh, and when the database access fails, how do you go to the database and get the localized version of the message explaining that you can't get to the database?
It makes sense for some things, like customer data, but not for localization of the app.
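To make that failure mode concrete, here is a minimal sketch of the usual mitigation - a compiled-in fallback catalog consulted when the store is unreachable (all names hypothetical):

    // Strings compiled into the bundle - always available, even with the DB down.
    const fallback: Record<string, string> = {
      "error.db.unreachable": "The database could not be reached.",
      "ui.save": "Save",
    };

    // Stand-in for a real query; throws when the database is unreachable.
    function dbLookup(key: string, locale: string): string | null {
      throw new Error("database unreachable");
    }

    function translate(key: string, locale: string): string {
      try {
        return dbLookup(key, locale) ?? fallback[key] ?? key;
      } catch {
        // The one message you can never fetch from the database:
        return fallback[key] ?? key;
      }
    }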
On top of the obvious drawbacks that Dave pointed out, translations have a bad habit of being longer than their English equivalent (in many/most cases). A four-letter word like "Desk" can become a 12-letter word like "Schreibtisch" in German. I read somewhere that you should expect a 15 to 30% increase in length when you go from English to other European languages. A user interface that is not size-adjusted looks clunky, with truncated fields or big empty spaces.
Mircea
Mircea Neacsu wrote: translations have a bad habit of being longer than their English equivalent (in many/most cases) This is most certainly true if the translating is done by a native English speaker. When I translate Norwegian text to English, the English text is frequently longer, most certainly in the first version. I do not know the entire English language, and often use rather cumbersome linguistic constructions where there is a much shorter way of expressing the same thing. It is the same with the native English speaker: he doesn't know Norwegian well enough to find the short (and correct!) Norwegian translations.
Following up your example: A translator might look up 'desk' in an English-Norwegian dictionary, finding 'skrivebord' (a close parallel to the German word). 'Skrivebord' is long and cumbersome for daily talk; we say 'pult'. (That is, if you pronounce it with a short 'u' sound. With a long 'u' sound, the English equivalent would be 'had intercourse', although that is not very likely to appear in a user interface.) Also note that both the Norwegian terms are more specific than the English 'table': it is not a dinner table, not a sofa end table, not a set of columns, not a pedestal for a lamp, artwork or whatever. It is a worktable where you do some sort of writing ('skriving'). If you need to express that specific kind of table in English, the English term will increase in length.
Finding individual words that are longer in other languages than in English is a trivial task. So is the opposite. I have written quite a few documents in both English and Norwegian versions, and translated English ones not written by me into Norwegian. If the number of text pages differs at all, it is by a tiny amount.
On average, that is. A user interface usually needs a few handfuls of words, some shorter, some longer than in the original language. You must prepare for those that are longer - and you don't know which ones those are. So when translating from any original language (not only English) to any other language (including English), be prepared for single terms, prompts etc. being 30% larger than the original. (15% is definitely too little.) Although there is a significant effect from the translator not knowing the target language well enough, the increased length may be completely unavoidable, regardless of original or target language.
Couldn't agree more. My point was that simply plucking text from a database and putting it in a user interface will make it look bad, to the point of being useless.
Here is a horror story I've seen "in a galaxy far, far away".
The programmer who knew everything tells his team: just put all the texts you need translated between some kind of delimiters. I'm going to write a nice little program that extracts all those texts, puts them in a database and passes them to the i18n team. They will just have to enter translations for those texts and, Bob's your uncle, I've solved all these pesky problems.
Trouble came soon after, first when they realized some words had multiple meanings. In English "port" can be a harbour or the left side of a ship, but in French "port" and "bâbord" are very different words. Translators had no clue in what context a word was used; besides, they could enter only one translation for a word. Source code also became a cryptic mess, where something like SetLabel("Density") became SetLabel(load_string(ID_452)). Some of the texts were too long, others too short; in brief, such a mess that most users gave up on the localized translations and stuck to English. But the programmer who knew everything remained convinced he had solved the problem.
Moral of the story: humans are messy and so are their languages. There is no silver bullet, and text in a database is very, very far from being one.
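The context problem at least has a well-known shape: key messages by an identifier plus a context, never by the raw source string. A minimal sketch (hypothetical names, and emphatically not the approach from the story):

    // One catalog entry per (context, source) pair, so "port" can carry two translations.
    const catalog: Record<string, Record<string, string>> = {
      fr: {
        "harbour|port": "port",
        "nautical-side|port": "bâbord",
      },
    };

    function translate(locale: string, context: string, source: string): string {
      return catalog[locale]?.[`${context}|${source}`] ?? source; // fall back to the source text
    }

    // translate("fr", "nautical-side", "port") -> "bâbord"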
Mircea
I just have to add my 'horror story' from at least as long ago:
My company went to a professional translator to have the prompts for our word processor (remember when we used to call it a word processor?) translated to German. The translator was given all the strings as one list of English terms. Nothing else. No context indication.
This text processor had, of course, a function for 'Replace all -old text- with -new text-', with a checkmark for 'Manual check?'. In the German translation this came out as 'Ersetze -- mit --', and a checkbox 'Handbuch kontrollieren?' - 'check the handbook'. Without context, the translator had no way of knowing that 'manual' was an adjective, not a noun.
This was discovered in time for the official release, but only a few days before.
This point exactly. He has continued on the research ticket and identified how many duplicate translated items we have in the system. I believe he intends the optimization described above.
Hogan
Yes, but unfortunately there are very few companies, if any (even very large ones), that can afford to hire 100 people fluent in living languages to work exclusively on context translations for each software project.
And keep in mind that 100 is not even close to the number of identified living languages. But it likely is close to what one might consider a viable market.
So one just hopes that one can get by.
Mircea Neacsu wrote: Trouble came soon after, first when they realized some words had multiple meanings.
I have worked for a number of companies that had no problem using services to provide translations based on provided text.
And there are more difficult problems than just providing the context for a specific word.
Mircea Neacsu wrote: gave up on using localized translations and stuck to English.
France and Quebec (a province of Canada) both have laws that basically state that a company cannot require an employee to speak/read any language except French. So if you bring that software in there, the company could end up with a number of employees sitting around staring at the walls all day.
And the governments stipulate that the software they use must be in French. You can't get the contract without agreeing to that.
Mircea Neacsu wrote: became SetLabel(load_string(ID_452)).
If programming were easy, they wouldn't need people to do it.
jschell wrote: France and Quebec (a province of Canada) both have laws... I lived in Montreal for over 30 years, so I know a bit about language laws in Quebec. Incidentally, I also know a bit about those in France. I cannot say more because I would run afoul of CP rules 🤐
No amount of regulation can force people to use a dysfunctional product. They will find a way to go over/under/behind those regulations. If, in your case, a database or a simple text file was good enough, more power to you.
Mircea
jschell wrote: And keep in mind that 100 is not even close to the number of identified living languages. But it likely is close to what one might consider a viable market. If you cover 100 languages, you are bound to also run into a lot of cultural aspects that are not language specific or language based.
20 years ago, 'everyone' wanted to collect the entire internet into their databases. Archive.org is one of the (few) survivors of that craze. I was in it, and went to an international conference. Access control to the collected information was an essential issue, and one of the speakers told how he had been in negotiations with delegates from US native groups about how to protect information that should be available only to males, or only to females. Also, some information should be available only during the harvesting season, other information only during the seeding season. The limits of each of course depended on the kind of crop.
Needless to say, the access control of the system presented by the speaker did not have sufficient provisions for these demands. He presented it as an unsolved issue. If we simply state "We can't honor such cultural restrictions - the whole world must simply adapt to our culture, accept our freedoms (and most certainly respect all our taboos)!", then we are cultural imperialists as bad as those of the era of colonization.
And we are.
Mircea Neacsu wrote: There is no silver bullet One could use a trick we used in the 1980's, when IT books were not translated:
You learn English.
Bastard Programmer from Hell
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
99% of my heavy criticism of computer book authors and their editors is directed towards English-language textbooks. They are most certainly no better than the translated ones.
I guess that part of the problem is that major parts of the English-speaking world (read: the US of A) do not read very much any more. Their critical sense for rejecting (sometimes very) bad books, from a language, editorial and presentation point of view, has worn out. They do not know how to distinguish a well-written book from a crappy one. So the fraction of crappy books is steadily increasing.
My impression is that the average IT textbook written in other languages (my experience is with Scandinavian languages, but I suspect it holds for a lot of others) is written under much stricter editorial control, and is a lot less smudged with 'edutainment' elements, going much more directly to the point. So the number of pages is about half.
Originating in the US of A has in no way been a guarantee of quality for an IT textbook. Quite the contrary. When I feel the temptation to dig out my marker and my pen to clean up the text, I often think of how I could reshape the text into something much better in a Norwegian edition, at half the pages. But at the professional level where I am reading new texts today, the market for a Norwegian textbook is too small for it ever to pay the expenses. And making an abridged English version would lead to a lot of copyright issues.
trønderen wrote: Originating in the US of A has in no way been a guarantee of quality Full stop there, as that is not just limited to books.
Learning English (not American) gives you a wider range, just as learning to write in English does. To drive that point home: our little CP community is English-only.
Bastard Programmer from Hell
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
trønderen wrote: I guess that part of the problem is that major parts of the English-speaking world (read: the US of A) do not read very much any more. Their critical sense for rejecting (sometimes very) bad books, from a language, editorial and presentation point of view, has worn out. They do not know how to distinguish a well-written book from a crappy one. So the fraction of crappy books is steadily increasing.
I doubt the implied cause there.
It is much easier to produce (publish) a book now than even 20 years ago. And much, much easier than 50 years ago.
And it is orders of magnitude different for self publishing.
50 years ago one would need a publisher to accept the book, and then an editor working for that publisher would edit it. (Not totally true, but one would have needed much more knowledge and money to self-publish back then.)
Now, even when that path is followed, the role of the editor is smaller. Probably due to the publisher wanting to save costs, but also because there are so many more books published.
I would be very surprised if the publishers were not seeking quantity rather than quality now. Much more so than in the past.
I suspect all of those factors have even more of an impact for 'text books'. After all, just as one consideration, there is quite a bit of difference between editing a romance novel and editing a programming-language book.
The reduction in quality is most certainly not limited to self-published books. I guess every English IT book I have bought(*) was published by what everybody would classify as highly respected publishing houses. These no longer need to spend resources on keeping the quality up, through editors and reviewers. The books sell anyway.
One thing one could mention to explain all the talkety-talk and lack of conciseness: the entry of the PC as a writing tool. When authors were still using typewriters, editing was much more cumbersome; it required a lot more work to switch two sentences around, or move a paragraph to another chapter. The first thing that happened was that authors wrote down every thought they could think of, without filtering the way they did before. The second was that they forgot how to use the delete key, and how to cut and paste to clean up the structure of the text.
I guess that the publishing process makes up a larger fraction of the budget today, and the cost of the paper a smaller fraction than it used to. Publishing/printing a 600-page book is not three times as expensive as a 200-page one. (Well, it never was three times as expensive, but the cost of the materials had much more impact on the sales price 50 years ago.)
(*) I do have one self-published IT book - Ted Nelson: Computer Lib/Dream Machines, the book that introduced the concept of hypertext. It was published 49 years ago, before you had MS Word for writing your manuscript. Most of it is typewriter copy, or handwritten. This is probably the first IT book I'd try to save if a fire broke out in my home.
Right. "Time" or "lag".
Resource files are easier and faster to update versus a "resource management system" sitting on a server (IMO).
You can easily write a file parser at some point to report on your "resources".
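A minimal sketch of such a parser, assuming .resx files and nothing beyond Node built-ins (the regex is a simplification, not a full XML parse):

    import { readFileSync } from "fs";
    import { join } from "path";

    // Pull every <data name="...">...<value>...</value> entry out of a .resx file.
    function resxEntries(path: string): Map<string, string> {
      const xml = readFileSync(path, "utf8");
      const entries = new Map<string, string>();
      const pattern = /<data name="([^"]+)"[^>]*>[\s\S]*?<value>([\s\S]*?)<\/value>/g;
      for (const m of xml.matchAll(pattern)) entries.set(m[1], m[2]);
      return entries;
    }

    // Report which keys exist in the neutral file but are missing from a translation.
    function missingKeys(dir: string, neutral: string, translated: string): string[] {
      const base = resxEntries(join(dir, neutral));
      const loc = resxEntries(join(dir, translated));
      return [...base.keys()].filter((k) => !loc.has(k));
    }

    // missingKeys("Resources", "Strings.resx", "Strings.de.resx")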
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I