|
Do keep in mind that there are people out there who run port scans in their free time. You might find strange things happening in your database if you hook it up to the internet without some precautions:
Set the server to "Windows Authentication" and turn the SA account off. There's probably a hardening checklist on Google; I didn't check, but you might want to.
Bastard Programmer from Hell
if you can't read my code, try converting it here[^]
|
|
|
|
|
When your server is behind a router, it has a "private" IP address, doesn't it? Then things get complicated. You could create a VPN, such that home and office are virtually in the same network (same range of IP addresses), or you have to configure port forwarding on your router. I can't give details on how to do that...
|
|
|
|
|
Dear Readers,
The question above is a "small" question about something I'm at a dead end on at the moment. How do I design a database to store all kinds of addresses from all over the world? All countries have different address specifications: some countries have more address fields than others, and/or a different structure.
This is just a fraction of a larger problem I'm facing: I need to redesign a country-specific database into a global design. Addresses might be the easiest thing to globalize.
How do you design/model such a "global" database?
Thanks in advance,
Rémy Samulski
|
|
|
|
|
Cool problem.
The answer is to normalize properly.
One table for countries. One for address types, such as delivery addresses or box addresses and so on, per country.
One for address parts, with one row per street housenumber, town, zip and so on.
Another one for holding the combination of address parts and address types, with ordering, or rather placement on the letter.
You'll need one table to hold the addresses as entities, with an address type reference but not the actual data.
This table could of course be combined with a person or company if that's fitting, but I would split it, as a company might have more than one address.
And lastly, one for holding the actual data, with one row per address_type_part and address.
Make a model of this and check whether I've forgotten anything.
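Sketched as DDL it could look something like the following; every table, column and type name here is my own assumption, just to make the model concrete:

```sql
-- Rough sketch of the normalized address model described above.
CREATE TABLE country (
    country_id INT PRIMARY KEY,
    name       VARCHAR(100) NOT NULL
);

CREATE TABLE address_type (          -- delivery address, box address, ...
    address_type_id INT PRIMARY KEY,
    country_id      INT NOT NULL REFERENCES country (country_id),
    name            VARCHAR(50) NOT NULL
);

CREATE TABLE address_part (          -- street housenumber, town, zip, ...
    address_part_id INT PRIMARY KEY,
    name            VARCHAR(50) NOT NULL
);

CREATE TABLE address_type_part (     -- which parts a type uses, and where
    address_type_part_id INT PRIMARY KEY,
    address_type_id      INT NOT NULL REFERENCES address_type (address_type_id),
    address_part_id      INT NOT NULL REFERENCES address_part (address_part_id),
    placement            INT NOT NULL  -- ordering/placement on the letter
);

CREATE TABLE address (               -- the address as an entity, no data
    address_id      INT PRIMARY KEY,
    address_type_id INT NOT NULL REFERENCES address_type (address_type_id)
);

CREATE TABLE address_value (         -- the actual data
    address_id           INT NOT NULL REFERENCES address (address_id),
    address_type_part_id INT NOT NULL REFERENCES address_type_part (address_type_part_id),
    value                VARCHAR(200) NOT NULL,
    PRIMARY KEY (address_id, address_type_part_id)
);
```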
modified 11-Oct-12 6:18am.
|
|
|
|
|
Jörgen Andersson wrote: street, housenumber
Street and house number are a single fact, one entity. You will not use them as separate facts in the database, and it makes little sense to add an extra field for this.
On topic: there are houses without a number on Curaçao.
Bastard Programmer from Hell
if you can't read my code, try converting it here[^]
|
|
|
|
|
OK, nitpick taken; I'll remove that comma, and I'll remove the name and surname too, as they don't belong in the address.
Houses without numbers are quite common in the UK and Sweden as well.
|
|
|
|
|
Beep!!! Wrong answer.
Not all addresses have a number. For example, in the UK all of these are quite valid:
(House Number + Street - probably the most common)
123 Any Street
(House Name followed by street on the next line of address)
Mayflower Cottage
Some Street
(House Name followed by number + street on the next line of address)
Mayflower Cottage
123 Some Street
(Flat number + Block Name followed by street on the second line)
123 Mandela House
Some Street
(Office addresses are often Office number + Block Name followed by Street number + Street name)
7 Imperial House
123 Any Street
|
|
|
|
|
Well, there are of course different ways. One way is to create a dynamic database:
Create an address table with the fields DataId INT, FieldName VARCHAR(X), FieldValue VARCHAR(Y) or BINARY.
The index is (DataId, FieldName).
Instead of storing "New York" in a column "CITY", you store "City" in "FieldName" and "New York" in "FieldValue".
You can also define field groups, like "Address", which may occur more than once, by adding another column Line INT. That allows you to store an Address 1, Address 2, ..., Address n, and of course a Phone 1, Phone 2, ..., Phone n.
If n isn't large enough, you can store as many values as you need.
This way isn't efficient for tables with a lot of complex queries, but I don't think you need complex queries here, so the response time should be good enough in this case.
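A minimal sketch of that key/value (entity-attribute-value) layout; the names and sizes are assumptions, and the BINARY variant is omitted for brevity:

```sql
-- One row per field, instead of one column per field.
CREATE TABLE AddressData (
    DataId     INT          NOT NULL,            -- one record = one DataId
    FieldName  VARCHAR(50)  NOT NULL,            -- 'City', 'Phone', ...
    Line       INT          NOT NULL DEFAULT 1,  -- Address 1, Address 2, ...
    FieldValue VARCHAR(200) NULL
);
CREATE INDEX IX_AddressData ON AddressData (DataId, FieldName);

-- Instead of a CITY column holding 'New York':
INSERT INTO AddressData (DataId, FieldName, Line, FieldValue)
VALUES (1, 'City',  1, 'New York'),
       (1, 'Phone', 1, '555-0100'),
       (1, 'Phone', 2, '555-0101');
```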
------------------------------
Author of Primary ROleplaying SysTem
How do I take my coffee? Black as midnight on a moonless night.
War doesn't determine who's right. War determines who's left.
|
|
|
|
|
LiQuick wrote: How do you design/model such a "global" database?
A varchar for the name of the recipient, a varchar for each of the address lines, and a designation for the country. That's as far as I'd normalize the structure - you will not be using the zip codes to filter on, and this way you can support very exotic addresses, even those that include an extra line saying "deliver on second floor".
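For what it's worth, that structure could look like this (all names and sizes are my assumptions):

```sql
-- The "keep it simple" variant: each address line is one opaque fact,
-- only the country gets its own designation.
CREATE TABLE recipient_address (
    recipient_address_id INT PRIMARY KEY,
    recipient            VARCHAR(200) NOT NULL,  -- name of the recipient
    address_line1        VARCHAR(200) NOT NULL,
    address_line2        VARCHAR(200) NULL,      -- room for "deliver on second floor"
    country_code         CHAR(2)      NOT NULL   -- e.g. an ISO 3166-1 alpha-2 code
);
```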
Bastard Programmer from Hell
if you can't read my code, try converting it here[^]
|
|
|
|
|
That won't work in Japan, where they (to my knowledge) write the address in the "wrong" order: starting with district, then town, zip, subarea, street, street number and name.
I'm writing this from memory though, so you'll have to cut me some slack, as it's probably wrong in some part(s).
UK also needs up to six lines in the address.
I have to admit though that we're using your structure in our internal systems, but with extra info for zip and town as we actually do filter on those fields occasionally.
We only work inside our own country.
|
|
|
|
|
Jörgen Andersson wrote: UK also needs up to six lines in the address.
Hence the suggestion to treat it as a single entity and provide a varchar-field.
Jörgen Andersson wrote: I have to admit though that we're using your structure in our internal systems, but with extra info for zip and town as we actually do filter on those fields occasionally.
If you treat the address as a "complete" fact and add the zip code as "extra info", then there are few problems; it's merely a bit of overhead to have the zip code twice.
Normalization is hard, and one does not simply cut a piece of information into what looks like 'atomic blocks'; if normalization worked that way, you'd need seven fields for each date (Day, month, year, hours, minutes, seconds, timezone), and all the months would be in a single table.
Bastard Programmer from Hell
if you can't read my code, try converting it here[^]
|
|
|
|
|
Eddy Vluggen wrote: if normalization would work that way, you'd need 7 fields for each date (Day,
month, year, hours, minutes, seconds, timezone),
I've seen that done, minus the timezone. That was a freaking moment.
|
|
|
|
|
Jörgen Andersson wrote: I've seen that done, minus the timezone. That was a freaking moment.
I imagine! I wanted a far-fetched example, so encountering something like that in the wild is a bit of a wtf-moment.
Codd did not say 'eliminate every repeating group', as that would imply that we cannot even store the day number '12'. It would repeat for each month, after all. Putting the day numbers in a separate table and linking to them using a Guid would be an option, but also a bit insane.
The 'worst' modeling decision that I remember is storing everything as a varchar, including an array of bits, actually stored as a "11101010001100" string. When I asked who wrote it (in the same tone as usual), my boss-for-the-moment responded that it was an optimization.
Bastard Programmer from Hell
if you can't read my code, try converting it here[^]
|
|
|
|
|
A bit array as a varchar scores quite high on the list of wtf moments.
The really stupid part of storing time in six different fields is that they are all just different representations of the same thing (time), and you can calculate one from another.
If there wasn't a date type available I'd simply store seconds in a number field, which coincidentally is what happens behind the doors in a database.
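In T-SQL that round trip could be sketched like this (the epoch is an arbitrary choice of mine):

```sql
-- Storing a moment as plain seconds-since-epoch in a number field...
DECLARE @epoch  DATETIME = '19700101';
DECLARE @stored INT = DATEDIFF(SECOND, @epoch, '20121011 06:18');

-- ...and turning it back into a date only when a human needs to read it.
SELECT DATEADD(SECOND, @stored, @epoch) AS readable_again;
```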
|
|
|
|
|
Jörgen Andersson wrote: If there wasn't a date type available I'd simply store seconds in a number field, which coincidentally is what happens behind the doors in a database.
The beauty of having it in seconds is that it's once again a single (atomic) fact, without any cultural formatting or localized representation
It's hard to keep things 'simple'.
Bastard Programmer from Hell
if you can't read my code, try converting it here[^]
|
|
|
|
|
Since this is an existing application, presumably with multiple international clients, you will already be familiar with dealing with issues about how the data will be used and how it is viewed. That will impact some of the design.
I would expect that you would need to use a template which defines addressing schemes. That would be a dynamic (loaded from somewhere) part of the application, defining the layout for display and printing.
The following is an interesting read in terms of possible variations.
http://www.columbia.edu/~fdc/postal/[^]
|
|
|
|
|
Thank you all for your time and information!
As I'm a programmer in heart and soul, I was hoping for a solution similar to object inheritance, where I can modify a base table into a new table, but maybe I'm way off planet Earth.
I'm considering all your answers and trying to figure out a neat solution. At the moment I'm thinking about a common table (where the data resides that belongs to all variants) and a "varianttype-key-value-type" (dutchAddress-roomnumber-34-int) table in which I store the differences.
|
|
|
|
|
LiQuick wrote: As I'm a programmer in heart and soul, I was hoping for a solution similar to object inheritance, where I can modify a base table into a new table, but maybe I'm way off planet Earth.
I do not see OO as an information-modeling language. Every db will support foreign keys, and you could copy the concept of inheritance:
HUMAN
-----
Id
Name
Gender
DoB
EMPLOYEE
--------
Id
HumanFk (FK to HUMAN.Id)
HiredOn
HasAccessToSuperSecretFiles
..but in a relational world, that will make things worse. You'd get *VERY* complex queries, and most UIs would not be prepared to handle an answer that comes in various forms.
If it's handled as a single entity and used as one, then store it as one.
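For the record, the HUMAN/EMPLOYEE sketch above written out as DDL (the column types are my assumptions):

```sql
-- "Inheritance" via a foreign key: EMPLOYEE extends HUMAN.
CREATE TABLE HUMAN (
    Id     INT PRIMARY KEY,
    Name   VARCHAR(100) NOT NULL,
    Gender CHAR(1),
    DoB    DATE
);

CREATE TABLE EMPLOYEE (
    Id      INT PRIMARY KEY,
    HumanFk INT NOT NULL REFERENCES HUMAN (Id),
    HiredOn DATE,
    HasAccessToSuperSecretFiles BIT NOT NULL DEFAULT 0
);

-- Reassembling one "object" already takes a join:
-- SELECT h.Name, e.HiredOn
-- FROM EMPLOYEE e JOIN HUMAN h ON h.Id = e.HumanFk;
```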
Bastard Programmer from Hell
if you can't read my code, try converting it here[^]
|
|
|
|
|
I'd say that it depends.
It's often a balance between complex queries and performance. You might end up with extremely wide tables with a lot of null values, where you will need many indexes, instead of having one large index on the "HUMAN" table and several small, optimized indexes on the "inherited" tables.
So it is a decision that can't be generalized, but rather one you need to make depending on the situation.
|
|
|
|
|
Jörgen Andersson wrote: I'd say that it depends.
It spells trouble, because the software that uses it will have to be prepared for a lot of scenarios. It would require "communication" and "human interaction". Two very obvious failure points.
Bastard Programmer from Hell
if you can't read my code, try converting it here[^]
|
|
|
|
|
Well, I've had to clean out faulty data too many times because the software wasn't prepared, and the communication didn't happen. So the human interaction entered data that shouldn't have been entered.
Clean, unduplicated data is the main reason to normalize; performance is a positive side effect. But you lose time during development, that's for sure.
|
|
|
|
|
Jörgen Andersson wrote: But you lose time during development, that's for sure.
It's one of the more important and underestimated parts of development. It's the reason why Microsoft Access does not solve 'everything'.
How much would an inconsistency in the data cost?
Bastard Programmer from Hell
if you can't read my code, try converting it here[^]
|
|
|
|
|
I'm just not wanting to reinvent the wheel here.
I need to do the following:
1. Get two parts of the filename to use within the SQL. The filename looks like this: some name here.123456.1.txt
I need the 1 and the 123456 to populate two columns within the database table, along with the data from the text file.
The text file is fixed width: positions 1-9 are the first column, 10-18 the second column, etc.
I have around five hundred of these txt files that I will need to loop through and process.
Has anyone done anything like this in a stored proc, not using SSIS?
Many thanks
|
|
|
|
|
Open up a console window ([Windows Key]+[R], type cmd, press Enter) and execute the command below:
dir "C:\Windows\m*.log" /b >files.txt
Replace my Windows location (and the wildcard) with one that's more appropriate for you. The next step is easy: you copy and paste the result into MS Access. You now have a table with filenames. You could loop over those in Access, fetch the correct file and save it with some VBA code.
Then make a linked server (in SQL Server) to the Access database (one for generic imports like these) and simply copy the (appropriate) records over.
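A sketch of that last step, assuming the linked server is called ACCESS_IMPORT and the Access table has a FileName column (those names, and the target columns, are my inventions). The two extra parts can be peeled off the filename with PARSENAME, since names like 'some name here.123456.1.txt' have exactly four dot-separated parts:

```sql
-- Copy the records over, splitting the filename parts on the way in.
INSERT INTO dbo.ImportTarget (FilePart1, FilePart2, Col1, Col2)
SELECT PARSENAME(FileName, 3),   -- '123456'
       PARSENAME(FileName, 2),   -- '1'
       Col1,
       Col2
FROM ACCESS_IMPORT...Files;      -- four-part name to the linked server
```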
Bastard Programmer from Hell
if you can't read my code, try converting it here[^]
|
|
|
|
|
Thank you for your reply.
I need to be able to do all of the above within a stored proc, without depending on any other app or tool. Any thoughts? Again, thank you for your reply.
|
|
|
|