Just got an email. They are asking for full disclosure of common clients and queries... nothing threatening, actually quite positive. This is fair enough, and probably should have been the practice all along. As for the missing data, it was explained as an internal issue. Apology accepted. We'll see where it goes from here. Think positive.
But did they actually use the phrase "Full Disclosure"?
Doesn't that sound like their lawyer asked them to get the list?
Think carefully about how you answer them. If your company has a lawyer, I'd run it by them. The developer threatened legal action and now wants this piece of information. Perhaps you should take the threat seriously.
Unless it is spelled out differently by contract, I'm not sure how the data ever belongs to the vendor. And I wouldn't sign such a contract with a vendor; I'd tell them to pound sand.
Most of my customers' business IS their information, or is tied directly to it. If the vendor owns it, then the vendor can tell you that if you migrate away from their product, you can't take the data with you. Who in their right mind would agree to such a thing?
This would be like Microsoft insisting that they own your documents because you wrote them in Word.
I also wouldn't give these guys a list of common customers. They are considering a lawsuit: they want 'evidence' that you've been tampering with their clients, and they want a way to calculate damages, so they can decide whether or not to move forward.
Hah! Seems they have done some homework... I got an email this morning with a SQL script attached showing all the queries used against their database... they must have started a trace. The email informed me that the queries passed QA.

A half-hour later, I got another email stating that our request for access to their system has been approved pending the acceptance and signature of a formal document. The document simply states that we acknowledge and respect their database and all objects contained therein or created thereby as intellectual property of the vendor. We also agree to identify them as the source of any data retrieved from their system (already in place) and to destroy any backups or databases we may have obtained from the client. (Sure thing.)

As a side note, the email stated that 'clients have a favorable opinion of your company and consider the connector to our system as "integral" to their business objectives'. I have a new contact on their dev team and a promise of cooperation should I need it. We are reviewing the document now. I may have to delete this thread, as it is now showing up in Google searches on the subject!
This seems to answer the question. The vendor owns the database: structure and objects. The client owns the data in the database. Clients also own the SQL Server license and control access to databases. In most cases, whether or not they know it, clients have accepted a EULA that prohibits them from sharing software components (including databases) with third parties without written consent. I believe that a contract is a good idea to protect all three parties involved.
OK... I've got the thing deployed, but I'm trying to run it from a job. If my job is running in a database called "dev" and a package named "foo" is running in Integration Services on "dev", how would I go about referencing it in my job step command?
Looking at my @command assignment, is anything missing?
I have the thing deployed to MSDB\Maintenance Plans, but I really don't know if the reference is as simple as this.
...Is it? I'm getting an "XML parsing error" (paraphrasing) that seems to be a permissions error when trying to open the package. Either the package doesn't exist at the location specified or my account doesn't have access to the file. I'm pretty confident that I have access to the file, so my other logical option is that the context I provided in my snippet above is incorrect.
Does anyone know how I should go about referencing it?
Thank you very much for the documentation, but I had already reviewed the materials at those links. My specific question was about the command-line reference I was making to my package after it had been deployed.
...at any rate, I found out what my problem was. I was attempting to reference the package with the FILE switch, but I had not deployed the package to the file system of the database server I was referencing (I had deployed it directly to the SQL Server instance).
...I'm still unsure how I would reference the package from SQL Server directly, but I repeated my deployment, this time to the file system, adjusted my reference to the location on the file system, and the package loaded at run time.
Just FYI... anyone referencing a package with the "FILE" switch, make sure your package is deployed to the file system. It'll save you a night's sleep.
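For the record, here's a minimal sketch of the job step I could have used to reference the package where it was originally deployed, i.e. stored in MSDB rather than on the file system. The job and step names here are made up; "dev" and "foo" are the server and package names from my posts above, and the /SQL switch is the MSDB counterpart of the /File switch that bit me:

    -- A sketch only: assumes the package was deployed to MSDB\Maintenance Plans
    -- on the "dev" instance, and that the job itself was already created
    -- with sp_add_job. Job and step names are placeholders.
    EXEC msdb.dbo.sp_add_jobstep
        @job_name  = N'RunFooPackage',
        @step_name = N'Execute foo',
        @subsystem = N'CmdExec',
        @command   = N'dtexec /SQL "\Maintenance Plans\foo" /SERVER dev';
    -- /SQL loads a package stored in MSDB; /File "C:\packages\foo.dtsx"
    -- would load the same package from the file system instead. The two
    -- switches are not interchangeable.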
1. Learn basic .Net
2. Learn about ASP
3. Learn basic database concepts, including SQL. This has nothing to do with steps 1/2.
4. Learn how "blobs" are stored/retrieved in the database (see the sketch after this list). Nothing to do with steps 1/2.
5. Learn to use SQL in .Net.
6. Put the above together to create a program that does what you want.
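For step 4, a minimal sketch of the database side of blob storage in SQL Server. Table and column names are invented for illustration; in practice the byte stream would arrive as a parameter from your .NET code rather than as a literal:

    -- Hypothetical table holding uploaded files as blobs.
    CREATE TABLE dbo.FileStore
    (
        FileId   INT IDENTITY(1,1) PRIMARY KEY,
        FileName NVARCHAR(260)  NOT NULL,
        Content  VARBINARY(MAX) NOT NULL  -- the blob itself
    );

    -- Store a blob (dummy JPEG header bytes here, for illustration only).
    INSERT INTO dbo.FileStore (FileName, Content)
    VALUES (N'example.jpg', 0xFFD8FFE0);

    -- Retrieve it.
    SELECT Content FROM dbo.FileStore WHERE FileName = N'example.jpg';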
So we've got a single Solicitations table that logs all outbound solicitations, which get wrapped up into a file and dropped on our FTP site; it also holds columns for handling the inbound processing when the records make their round trip back to our system.
So, for instance:
GUID, AccountID, AccountNumber, PhoneNumber, City, State, and Zip are stored in the log table on outbound processing, with the inbound columns defaulted to NULLs.
When the information comes back into our system after the marketing call has been made, it returns all of the data we sent out plus the remaining fields, all in CSV format.
So, a record might go out looking like this:
'1234-5678-ABC-BLAH',5,'666321234','800-123-4567','Chasey','Lane','976 Gloryhole Ave','Los Angeles','CA','66699'
and it will come back looking like this:
'1234-5678-ABC-BLAH',5,'666321234','800-123-4567','Chasey','Lane','976 Gloryhole Ave','Los Angeles','CA','66699',170,'Sealed the Deal','Ron Jeremy',07/06/2011
For the inbound processing, I want to create an SSIS package with a Data Flow that uses a Flat File Source to pick up the inbound file and has an OLE DB destination execute a SQL command that maps the inbound fields to the specific columns coming out of the CSV file.
I'm basically to here:
UPDATE Solicitations
SET Code = ?, CodeDescription = ?, Agent = ?, [Date] = ?
WHERE GUID = ?
  AND AccountID = ?
  AND AccountNumber = ?
How do I reference the Flat File Source in my command? How do I assure that the correct parameter values are being pushed into the command in the correct reference positions? Am I overthinking the problem?
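In case it helps anyone who finds this thread: as far as I know, a per-row parameterized UPDATE inside a Data Flow is done with the OLE DB Command transformation rather than an OLE DB Destination (a destination only inserts rows). The ? placeholders are strictly positional: in the transformation's Column Mappings tab they appear as Param_0, Param_1, and so on, in the order they occur in the statement, and you map each Flat File Source output column to the matching parameter there. The source is never referenced in the SQL itself. A sketch with the positions spelled out (column names taken from the post above):

    -- SqlCommand property of an OLE DB Command transformation.
    -- Flat-file columns are mapped to parameters by position
    -- in the Column Mappings tab:
    UPDATE Solicitations
    SET Code            = ?,  -- Param_0
        CodeDescription = ?,  -- Param_1
        Agent           = ?,  -- Param_2
        [Date]          = ?   -- Param_3
    WHERE GUID          = ?   -- Param_4
      AND AccountID     = ?   -- Param_5
      AND AccountNumber = ?;  -- Param_6

So no, you're not overthinking it; the part that looks missing from the SQL happens in the mapping UI, not in the statement.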
For deadlock detection, I am using SQL Server Profiler to detect locks. I also run a script to find the longest-running queries and see what is actually happening in my procedure.
Also check this article; I think it will be more useful than running SQL Profiler.
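If it helps, here are a couple of lighter-weight alternatives to a full Profiler trace. Both are standard SQL Server features, nothing specific to the poster's setup:

    -- Write deadlock graphs to the SQL Server error log (server-wide trace flag).
    DBCC TRACEON (1222, -1);

    -- Snapshot of live blocking: who is waiting on whom, and on what statement.
    SELECT r.session_id,
           r.blocking_session_id,
           r.wait_type,
           r.wait_time,
           t.text AS running_statement
    FROM sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE r.blocking_session_id <> 0;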
I would like to know which of the following options will produce the fastest transactions (and why). Unfortunately, I don't know enough about the low-level "nuts and bolts" of database operation to have a good intuition about this.
My PHP script resides on one server; my MySQL database resides on another. I believe that by performing operations on a regular MyISAM table, I am incurring time costs related to both 1) communicating between the web server and the database server, and 2) performing disk operations, because MyISAM is a disk-based storage engine. I do not, however, know how much of the time cost is associated with each of these factors, or whether both are really significant.
I have some temporary data that I want to manipulate on a per-session basis, and the way I see it I have two options:
I can create a local in-memory database on the web server and create a table there.
I can create a table on the remote MySQL server using the MEMORY storage engine.
Which one of these will be faster to perform queries on? Will there be a noticeable difference? My intuition is that the second will be slower because it is a transaction against a non-local database, but I don't know this for sure. When the table is in-memory, which computer's memory is it actually in? How much volume would there have to be for a difference to actually be noticeable?
Any thoughts would be very much appreciated.
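For what it's worth, a sketch of the second option (table and column names are invented for illustration). A MEMORY table lives in the RAM of the MySQL server, not the web server, so every query still pays the network round trip between the two machines; it only saves the disk I/O:

    -- Hypothetical per-session scratch table, kept in the MySQL server's RAM.
    CREATE TABLE session_scratch (
        session_id VARCHAR(64)  NOT NULL,
        item_key   VARCHAR(255) NOT NULL,
        item_value VARCHAR(255) NOT NULL,
        PRIMARY KEY (session_id, item_key)
    ) ENGINE = MEMORY;

CREATE TEMPORARY TABLE ... ENGINE = MEMORY would scope the table to a single connection, which may map more naturally to per-session data. Whether the difference over MyISAM is noticeable depends on how much of the cost is network latency versus disk I/O; only measuring your own workload will tell.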