I don't recall ever having seen a Word document that would fit in 255 bytes.
I just created a Word document containing a single letter ("a"), saved it to disk, and found a file size of 29KB. That was Word 2007 BTW.
The only type suited for storing binary data, IMO, is a blob.
MySQL offers BLOB and several size variants of it (TINYBLOB, MEDIUMBLOB, LONGBLOB). Use those. I have never used BINARY.
Yes, saving and retrieving data to/from a database is tricky; as long as it doesn't work, it is hard to tell where the problem lies; it could be in the saving part, or in the retrieving part. And when you have several bugs at once (I'm sure you do!) fixing any one of them doesn't seem to help at all, until you get to the last one.
The good thing is, you only have to solve it once; it applies to any kind of data, since anything that fits the byte-array model is handled the same way.
And the best thing is, millions of people have done this before, so the solution is bound to be available everywhere you look.
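To show how small that round trip really is, here is a minimal sketch in Python. It uses the built-in sqlite3 module in place of MySQL (the table name, file name, and sample bytes are made up), but the save/retrieve logic is the same with a MySQL BLOB column and a MySQL driver:

```python
import sqlite3

# In-memory SQLite database stands in for MySQL here; with MySQL you'd use
# a BLOB column and a MySQL driver, but the byte-array round trip is identical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE documents (id INTEGER PRIMARY KEY, name TEXT, data BLOB)")

# A .docx is just bytes like any other file; normally you'd get these from
# open("letter.docx", "rb").read().
original = b"PK\x03\x04 pretend this is a whole .docx file"

# Save: pass the bytes as a query parameter, never splice them into the SQL string.
conn.execute("INSERT INTO documents (name, data) VALUES (?, ?)",
             ("letter.docx", original))

# Retrieve: the column comes back as bytes, ready to write to disk unchanged.
(restored,) = conn.execute(
    "SELECT data FROM documents WHERE name = ?", ("letter.docx",)).fetchone()

assert restored == original  # byte-for-byte identical
```

If the bytes you read back are not identical to the bytes you put in, the bug is on the saving side (usually string conversion or escaping), not the retrieving side.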
I need your help now.
I set up merge replication from Server A to Server B and Server C, and then I wanted to set up transactional replication from Server A to another server, D, but I got this error:
Publication cannot be subscribed to by Subscriber database because it contains one or more articles that have been subscribed to by the same Subscriber database at merge level.
Changed database context to (.Net SqlClient Data Provider)
If I set up only transactional replication it works fine, but when I set up merge replication on Server A and then add one more transactional replication, it gives me the error mentioned above.
To summarize: I have one server, A, doing merge replication to clients. Now I want to add one more transactional replication from Server A to other servers, but it raises this error.
Can we run merge replication and transactional replication on Server A at the same time?
Basically, with merge replication, when a synchronization occurs, the final state of the rows is what is merged with the other side. So if I have a stock-tracking table in which each stock is updated thousands of times between synchronizations, only the last value of the stock will be replicated.
With transactional replication with updateable subscribers, the changes (the DML) are replicated as transactions. So if a row in our stock table is updated 1,000 times, there will be 1,000 individual transactions replicated.
Note that updateable subscribers is being deprecated and will likely not show up in SQL 11; peer-to-peer is the suggested upgrade path.
So if you need transactions replicated transactionally, you want updateable subscribers; if you want bi-directional synchronization between nodes that are frequently disconnected, merge replication is the way to go.
In my application
I have three tables: user, admin, operator.
Each of these three can send a message to another.
The message can be a response to a message sent by the other party,
or it may relate to an order (because the user places orders with the admin, and the admin can send a message about that order: approved, rejected, in process).
All of this is for a print-management application.
My problem is determining how many message tables I need (because there are many kinds of messages: order-related, responses, simple messages, plus tracking who is sender and receiver...).
Here is the image of my model.
Can you help me, or give me examples of similar cases?
I wrote a simple messaging application a few years ago and used something like the following:
Messages table:
MessageID -- the ID of the message
ParentID -- the ID of the immediate parent message
ThreadID -- the ID of the first message in the thread
SenderID -- the ID of the sender
TimeSent -- timestamp
Content... (whatever other columns you require)

Recipients table:
MessageID -- the ID of the message
RecipientID -- the ID of the recipient
This allowed for multiple recipients for each message. I used GUIDs for IDs, but you could use INTs if you like.
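A runnable sketch of that two-table design, using Python's sqlite3 for brevity (INTEGER IDs instead of GUIDs; all names and sample data are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Messages (
    MessageID INTEGER PRIMARY KEY,
    ParentID  INTEGER REFERENCES Messages(MessageID),  -- immediate parent, NULL for thread starter
    ThreadID  INTEGER,                                 -- ID of the first message in the thread
    SenderID  INTEGER NOT NULL,
    TimeSent  TEXT    NOT NULL,
    Content   TEXT
);
CREATE TABLE MessageRecipients (
    MessageID   INTEGER REFERENCES Messages(MessageID),
    RecipientID INTEGER NOT NULL                       -- one row per recipient
);
""")

# User 1 starts a thread addressed to users 2 and 3; user 2 replies to user 1.
conn.execute("INSERT INTO Messages VALUES (1, NULL, 1, 1, '2024-01-01', 'Order #42 placed')")
conn.executemany("INSERT INTO MessageRecipients VALUES (?, ?)", [(1, 2), (1, 3)])
conn.execute("INSERT INTO Messages VALUES (2, 1, 1, 2, '2024-01-02', 'Order approved')")
conn.execute("INSERT INTO MessageRecipients VALUES (2, 1)")

# ThreadID lets you fetch a whole conversation, in order, with one query.
thread = conn.execute(
    "SELECT MessageID, SenderID, Content FROM Messages "
    "WHERE ThreadID = 1 ORDER BY TimeSent").fetchall()
```

The ParentID/ThreadID pair is the key design choice: ParentID preserves the reply structure, while ThreadID avoids recursive queries when you just want the whole conversation.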
Limiting the loop will slow your performance. There really isn't a nice way of doing it without resorting to pre-import queries or using a cursor. Neither choice will result in faster performance.
Proper indexes on the conditions will improve performance given the query you listed.
If you are moving lots of data into an empty table and you can fully control its integrity, then remove all constraints (except identities) and indexes on the new table - update the table - and put them back on. This can greatly speed up a large insert.
You can also take a look at BULK INSERT if your SQL Server version allows it.
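The drop-load-recreate pattern looks like this, sketched against SQLite (table and index names are made up; in SQL Server you would script out the index/constraint drops and use BULK INSERT or bcp for the load):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (id INTEGER PRIMARY KEY, val TEXT)")
conn.execute("CREATE INDEX idx_val ON target (val)")

rows = [(i, f"value-{i}") for i in range(50_000)]

# 1. Drop the secondary index (keep the identity / primary key).
conn.execute("DROP INDEX idx_val")

# 2. Load everything in one transaction, so the engine isn't maintaining
#    the index row by row during the insert.
with conn:
    conn.executemany("INSERT INTO target VALUES (?, ?)", rows)

# 3. Recreate the index once, after the load.
conn.execute("CREATE INDEX idx_val ON target (val)")

count = conn.execute("SELECT COUNT(*) FROM target").fetchone()[0]
```

Building the index once over the finished table is typically much cheaper than updating it on every inserted row, which is the whole point of the trick above.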
I have a requirement to migrate the data from my standard OLTP database into an OLAP database based on the firing of a particular event. There will be some calculations performed on the OLTP data and the calculated/aggregated data will be inserted into the OLAP database.
I have 2 options to do this.
1. Use a web service to insert data into the OLTP database, make required calculations and then insert the data into the OLAP database from within the web service itself.
2. Use SSIS to migrate the data from the OLTP database into the OLAP database.
I am confused as to which option is better in terms of performance, security, maintainability, and scalability of the application. My application deals with huge amounts of data that must be used to generate reports.
I would like to delete a record in a SQL database after some time has passed, if a certain field in that record has a value of No. For instance, I would like to delete a record if the value of its Date field is 5 days past the timestamp date and its Returns Email field is still No.
How do I do that using the Job Scheduler in SQL? I just thought that if I can use the Job Scheduler instead of writing a stored procedure, it would save time. Thanks in advance.
The script below should help; just modify what is required. As you already know, the job step will be T-SQL.

DECLARE @DaysPassed INT -- Number of days required for record deletion

SELECT @DaysPassed = DATEDIFF(DD, GETDATE() - 5, GETDATE()) -- Diff in days, e.g. GETDATE() - 5, GETDATE() = today - (today - 5)

IF @DaysPassed = 5 -- If days passed = 5 then...
BEGIN
    SELECT GETDATE() -- Your delete statement can go here.
END
ELSE
    PRINT 'I love Code Project' -- If not equal to 5 then....
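For the delete itself, the job step really only needs one set-based statement: delete rows whose Date is more than 5 days old and whose Returns Email is still No. A runnable sketch (shown against SQLite with made-up table and column names; in the T-SQL job step the equivalent predicate would be something like WHERE [Date] < DATEADD(DAY, -5, GETDATE()) AND ReturnsEmail = 'No'):

```python
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Requests (Id INTEGER PRIMARY KEY, [Date] TEXT, ReturnsEmail TEXT)")

now = datetime(2024, 6, 10)
conn.executemany("INSERT INTO Requests VALUES (?, ?, ?)", [
    (1, (now - timedelta(days=7)).isoformat(), 'No'),   # old, no reply  -> delete
    (2, (now - timedelta(days=7)).isoformat(), 'Yes'),  # old but replied -> keep
    (3, (now - timedelta(days=2)).isoformat(), 'No'),   # too recent     -> keep
])

# Delete everything older than the 5-day cutoff that still has no reply.
cutoff = (now - timedelta(days=5)).isoformat()
conn.execute("DELETE FROM Requests WHERE [Date] < ? AND ReturnsEmail = 'No'", (cutoff,))

remaining = sorted(r[0] for r in conn.execute("SELECT Id FROM Requests"))
```

Scheduling that single DELETE as a daily job is all the Job Scheduler part amounts to; no per-row date arithmetic is needed.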
We need to move rows without creating any duplicates.
From the beginning there are no duplicates, right?
During the move we want to change the char '_' into ' ',
and this will cause some duplications, OK?
So we want to avoid this:
when moving the rows, we don't move products that would become duplicates after this renaming. Imagine that in our products list we have:
Baby_Doll (a future duplicate) and Baby Doll,
plus lots of others: Baby_Brush, Baby_toy1, Baby_toy2, ...
(These are not future duplicates.)
That seems a productive way,
but could you please give the 5 lines of code you are talking about?
Please also reconsider that it's SQL CE, because it lacks nearly everything!
Also, to clarify this situation further, please look at my previous reply on this thread.
but could you please give the 5 lines of code you are talking about?
Nope: you write the code, and when you get stuck, come back and ask for help.
As it is SQL CE you will probably have to do this in code; the same logic applies:
Get the original table into a List<>.
Create an empty List<>.
Loop through the data, inserting the unique records into the new List<>.
Write the new List<> back to the new table in the database.
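Those four steps can be sketched like this (the reply above assumes C# List<>, but the logic is identical; shown here in Python against SQLite, with made-up table names and the sample products from earlier in the thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE OldProducts (Name TEXT);
CREATE TABLE NewProducts (Name TEXT);
""")
conn.executemany("INSERT INTO OldProducts VALUES (?)",
                 [("Baby_Doll",), ("Baby Doll",), ("Baby_Brush",), ("Baby_toy1",)])

# 1. Read the original table into a list.
rows = [r[0] for r in conn.execute("SELECT Name FROM OldProducts")]

# 2. Start with an empty list (plus a set for fast duplicate checks).
unique, seen = [], set()

# 3. Rename on the way through, keeping only names not seen before.
for name in rows:
    renamed = name.replace("_", " ")
    if renamed not in seen:
        seen.add(renamed)
        unique.append(renamed)

# 4. Write the de-duplicated list back to the new table.
conn.executemany("INSERT INTO NewProducts VALUES (?)", [(n,) for n in unique])
```

Note this keeps the first occurrence of each renamed value; if you want the already-spaced "Baby Doll" to win over "Baby_Doll", sort or filter the input accordingly before the loop.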
Never underestimate the power of human stupidity
I've written tens of thousands of lines of code here that work;
it's this subject that I'm stuck on, which is why I asked.
I had been away from dealing with pure SQL scripts for a while, and I'm finding SQL CE a real big hassle.
I've written lots of scripts with similar errors; I can't do an INSERT ... SELECT query, and so on!
I may be wrong about some pieces in each bit of code I tried.
Here is what I was working on:
SELECT REPLACE(t2.OriginalColumn, '_', ' ')
FROM Table2 t2
LEFT OUTER JOIN Table1 t1 ON t1.Column1 = REPLACE(t2.OriginalColumn, '_', ' ')
WHERE t1.Column1 IS NULL
but I still couldn't adapt it to my tables and columns.
What do you think of this, or of the situation I've explained?
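For what it's worth, that anti-join approach does work once the keywords are spaced out and '_' is replaced by ' ' rather than ''. A runnable sketch against SQLite, keeping the placeholder table/column names from the post (if SQL CE rejects the combined INSERT ... SELECT, fall back to the row-by-row loop suggested earlier in the thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Table2 (OriginalColumn TEXT);  -- source rows, underscore-style names
CREATE TABLE Table1 (Column1 TEXT);         -- destination
""")
conn.executemany("INSERT INTO Table2 VALUES (?)",
                 [("Baby_Doll",), ("Baby_Brush",)])
conn.execute("INSERT INTO Table1 VALUES ('Baby Doll')")  # already present

# Move only the rows whose renamed value does not already exist in Table1:
# the LEFT OUTER JOIN finds a match for future duplicates, and
# WHERE t1.Column1 IS NULL keeps only the non-matching (safe) rows.
conn.execute("""
    INSERT INTO Table1 (Column1)
    SELECT REPLACE(t2.OriginalColumn, '_', ' ')
    FROM Table2 t2
    LEFT OUTER JOIN Table1 t1
        ON t1.Column1 = REPLACE(t2.OriginalColumn, '_', ' ')
    WHERE t1.Column1 IS NULL
""")

names = sorted(r[0] for r in conn.execute("SELECT Column1 FROM Table1"))
```

Here Baby_Doll is skipped because "Baby Doll" already exists in the destination, while Baby_Brush moves across as "Baby Brush".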