Can anyone help me write the answers to this question in SQL? I couldn't manage to upload the full question, so could anyone give a simple demonstration based on any reasonable assumptions?
i) Find each employer's name and address, and the number of students sponsored by each employer.
iv) Find the average fees of students sponsored by the employer with number 02.
I guess no one can help you without seeing the database structure and the exact problem; you will have to do it yourself.
As Victor said, there is no way anyone can help without knowing what the table structure is.
Where are you stuck?
Have you read point #11 here?
"One man's wage rise is another man's price increase." - Harold Wilson
"Fireproof doesn't mean the fire will never come. It means when the fire comes that you will be able to withstand it." - Michael Simmons
"You can easily judge the character of a man by how he treats those who can do nothing for him." - James D. Miles
You need to:
select from employers
join to students
group by
count
Then join that query to fees
group by
avg
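Something like this, using made-up table and column names (employers(emp_no, emp_name, address), students(student_id, sponsor_emp_no), fees(student_id, fee_amount)) since we haven't seen your schema - a sketch only, adjust it to the real tables:
-- i) each employer with the number of students they sponsor
SELECT e.emp_no, e.emp_name, e.address, COUNT(s.student_id) AS students_sponsored
FROM employers e
LEFT JOIN students s ON s.sponsor_emp_no = e.emp_no
GROUP BY e.emp_no, e.emp_name, e.address;
-- iv) average fee of students sponsored by employer number 02
SELECT AVG(f.fee_amount) AS avg_fee
FROM students s
JOIN fees f ON f.student_id = s.student_id
WHERE s.sponsor_emp_no = '02';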
Seriously, if this is beyond you then withdraw from the course.
Never underestimate the power of human stupidity -
RAH
I'm old. I know stuff - JSOP
I'm working on a personal app to help me better manage my family's car maintenance schedules. These are two of the tables being used and their relationship: Schema. For a given car, a schedule is created where at least one row is added to the service_schedules table and multiple rows are added to the schedule_intervals table depending on the options chosen.
One example would be if a "rotate tires every 7500 miles" schedule was created, one row would be added to the service_schedules table and 33 related rows would be added to the schedule_intervals table, one for each of the 7500 mile intervals up to 250000 miles.
Another example would be if a "change oil every 25000 miles or 12 months" schedule was created, one 'mileage' row would be added to the service_schedules table and 10 related rows added to the schedule_intervals table, one for each of the 25000 mile intervals up to 250000 miles. A 'time' row would also be added to the service_schedules table and 15 related rows added to the schedule_intervals table, one for each year up to 15 years.
With this schema, I can easily do 'past due services' and 'upcoming services' queries. For example, I can find all upcoming services for vehicle 4 with current mileage of 163451 using:
SELECT * FROM schedule_intervals
WHERE schedule_id IN (SELECT _id FROM service_schedules WHERE vehicle_id = 4)
  AND (163451 < mileage OR 1621540799076 < date)
Vehicle 4 has six service schedules for things like oil/filter (mileage or date), tires (mileage), spark plugs (mileage), transmission (mileage), etc., which together create 155 rows in the schedule_intervals table. Using the above query, 114 rows are returned for upcoming services.
The issue I'd like to resolve, if possible, is to only show one of each service type (one for mileage and one for date). In other words, instead of showing all of the oil/filter services that are due (175000, 200000, 225000, and 250000; April 2022, April 2023, April 2024, ..., April 2036), I'd like to show just the next one of each type (175000 and April 2022), as all the ones past those are irrelevant.
At this point, I don't know if I need to modify either of the two tables, and/or add a bit more complexity to the query.
Thoughts or ideas?
Thanks.
DC
"One man's wage rise is another man's price increase." - Harold Wilson
"Fireproof doesn't mean the fire will never come. It means when the fire comes that you will be able to withstand it." - Michael Simmons
"You can easily judge the character of a man by how he treats those who can do nothing for him." - James D. Miles
You need to make the query quite a bit more complex - probably half a dozen of them, one per vehicle and service type, driven by the date and the mileage:
Select Top 1 fieldname from tablename order by (date or mileage)
filtered by vehicle and service type.
You can then either pivot the results or accept multiple rows.
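If it helps, here's a rough sketch of that "next interval per schedule" idea, reusing the column names from your query; it assumes each schedule is either mileage-based or time-based, so the column that doesn't apply is NULL on its intervals:
-- One row per schedule: the lowest upcoming mileage (or date) still ahead of the car
SELECT si.schedule_id,
       MIN(si.mileage) AS next_mileage,
       MIN(si.date)    AS next_date
FROM schedule_intervals si
WHERE si.schedule_id IN (SELECT _id FROM service_schedules WHERE vehicle_id = 4)
  AND (163451 < si.mileage OR 1621540799076 < si.date)
GROUP BY si.schedule_id;
You could then join that back to service_schedules to pick up the service name for display; whether the unused column really is NULL on the other schedule type is an assumption on my part.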
Never underestimate the power of human stupidity -
RAH
I'm old. I know stuff - JSOP
Hello everyone.
I have an accounting calculation problem that I want to solve with a SQL query (in SSMS).
I have two groups of documents related to one person (creditor and debtor), and the creditor documents cover the debtor documents.
Consider the following example - how can the result shown at the end be achieved?
USE [master]
GO
DROP TABLE IF EXISTS #credit, #debit;

-- Sample debit documents
SELECT *
INTO #debit
FROM (VALUES
    (88, '2/14', 1, 5, 1),
    (88, '2/15', 2, 5, 1)
) AS A (personID, DocDate, DocID, Fee, IsDebit);

-- Sample credit documents
SELECT *
INTO #credit
FROM (VALUES
    (88, '2/16', 3, 3, 0),
    (88, '2/17', 4, 7, 0)
) AS A (personID, DocDate, DocID, Fee, ISDeb);

SELECT * FROM #credit;
SELECT * FROM #debit;

-- Desired result: how each credit document covers the debit documents
;WITH res AS
(
    SELECT 88 AS personID, 1 AS deb_DocID, 5 AS deb_Fee, 3 AS Cre_DocID, 3 AS Cre_Fee, 0 AS remain_Cre_Fee
    UNION
    SELECT 88, 1, 5, 4, 7, 5
    UNION
    SELECT 88, 2, 5, 4, 7, 0
)
SELECT *
FROM res;
What do you mean by "cover"?
Member 14006806 wrote: Creditor documents cover debtor documents.
Never underestimate the power of human stupidity -
RAH
I'm old. I know stuff - JSOP
I'm guessing that by "Creditor documents cover debtor documents" you are implying that there may be more than one credit document needed to cover the debit documents.
You are essentially trying to keep a running total (hint - good thing to google for).
But first you have to get your data into a usable form. In the example below I am going to use DocID to indicate the order in which the documents were received, because the data you have in the DocDate column is not a date and I can't use it in an ORDER BY clause.
The first step is to get all of the documents into a single result set - I'm not sure why you have the ISDeb/IsDebit columns when you can tell what they are from which table they are in. Personally, I would have had one transaction table with all types of transaction in the one place, with debit values negative and credit values positive.
But as you have a different model, you will need to start with something like this ...
select personID, DocDate, DocID, (-1) * Fee As Fee, IsDebit
from #debit
union all
select personID, DocDate, DocID, Fee, ISDeb
from #credit
There are several ways to calculate a running total e.g. you could adapt one of the solutions from this post[^]
;with combine as
(
select personID, DocDate, DocID, (-1) * Fee As Fee, IsDebit
from #debit
union all
select personID, DocDate, DocID, Fee, ISDeb
from #credit
)
,CTE
as
(
select T.personID, T.DocDate, T.DocID, T.Fee, T.Fee as running_total
from combine as T
where T.DocID = 1
union all
select T.personID, T.DocDate, T.DocID, T.Fee, T.Fee + C.running_total as running_total
from CTE as C
inner join combine as T on T.DocID = C.DocID + 1
)
select C.PersonID, C.DocDate, C.DocID, C.Fee, C.running_total
from CTE as C
Which gave these results:
PersonID DocDate DocID Fee running_total
88 2/14 1 -5 -5
88 2/15 2 -5 -10
88 2/16 3 3 -7
88 2/17 4 7 0
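As an aside, if you are on SQL Server 2012 or later, a windowed SUM gives the same running total without the recursion - a sketch reusing the same combine CTE:
;with combine as
(
select personID, DocDate, DocID, (-1) * Fee As Fee, IsDebit
from #debit
union all
select personID, DocDate, DocID, Fee, ISDeb
from #credit
)
select personID, DocDate, DocID, Fee,
       -- running total of the signed fees in document order
       SUM(Fee) OVER (PARTITION BY personID ORDER BY DocID
                      ROWS UNBOUNDED PRECEDING) as running_total
from combine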
This strikes me very much as homework, so just be aware that your tutor probably knows about this site as well.
That still doesn't get the results you want - you need to start looking at PIVOT to get the credit fee and debit fee onto the same row - but for that you are going to need a way of linking the two together, i.e. "this credit value covers debit X".
I have a situation where, when I run this stored proc on the live database, I get a timeout error.
If I copy the two tables into a different, "dead" database where there are no transactions happening, then I get a result instantly.
The table JN has over 2 million records.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; has not helped, and neither has WITH (NOLOCK).
Any ideas, please? The stored proc is below:
CREATE PROCEDURE [dbo].[mysp]
    @sRefDateFrom AS CHAR(8),
    @sRefDateTo   AS CHAR(8)
AS
BEGIN
    SET NOCOUNT ON;
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

    SELECT A.FormatCode, A.AcctName, SUM(J.Debit) AS Debit, SUM(J.Credit) AS Credit
    FROM JN J
    JOIN AC A ON A.AcctCode = J.Account
    WHERE J.RefDate >= @sRefDateFrom
      AND J.RefDate <= @sRefDateTo
    GROUP BY A.FormatCode, A.AcctName
    ORDER BY A.FormatCode
END
Make sure you have an index on RefDate. Probably another one on FormatCode and AcctName.
Beyond that, you need to use the database tools to track down which part is taking the most time. If it is Microsoft SQL Server then you can use the Estimated Execution Plan and other tools.
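For example (just a sketch - the index names and INCLUDE columns below are assumptions based on the columns the procedure touches, so adjust them to your actual schema):
-- Supports the WHERE clause on JN and covers the aggregated columns
CREATE NONCLUSTERED INDEX IX_JN_RefDate
    ON JN (RefDate)
    INCLUDE (Account, Debit, Credit);

-- Supports the join to AC and covers the grouped columns
CREATE NONCLUSTERED INDEX IX_AC_AcctCode
    ON AC (AcctCode)
    INCLUDE (FormatCode, AcctName);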
Hi,
I have a question about choosing an in-memory database.
Within a Java application, we need to use an in-memory database. This is a requirement of our customer: for reasons of privacy and personal data processing, they do not want the data to be stored on disk, only in memory. However, we also need a relational database, because we need to store several kinds of information with relationships between them, we need to manage ACID transactions, and we also need to store binary data (such as photos). Finally, we need high availability: if a server goes down, another server must be able to respond and therefore hold the same data (still in memory).
What do you think is the best database that would meet these constraints?
I took a look at some popular relational in-memory databases:
HSQLDB does not seem to have a redundancy feature,
H2 seems to have redundancy if used as a database server but not if used as an embedded in-memory DB,
and some people told me that Apache Derby was slower.
Thanks for your help
Mathieu Seillier wrote: for reasons of privacy and personal data processing, he does not want the data to be stored on disk but only in memory.
Caveat - I'm not up on in-memory DBs, but it sounds like your client has been fed some really good kool-aid. I'd be astonished if in-memory improved your security over a well set up and encrypted relational DB.
You had also better have a reasonably small user base if you are storing a DB of images in memory - or a hell of a lot of memory. What about DR and backups, or are they expecting never to have a problem with their primary servers?
Never underestimate the power of human stupidity -
RAH
I'm old. I know stuff - JSOP
SQLite can handle in-memory databases, but that will certainly limit the amount of data that can be processed. And what happens when the application ends - where does the data go then? There are far better ways of securing data than this idea.
Whatever their reason for asking for in-memory, note that with SQL Server In-Memory OLTP, if the durability is SCHEMA_AND_DATA then a copy of the in-memory data still exists on disk. In-memory is not a security solution.
I suggest you listen to their needs and plan a suitable solution. Ask them as many questions as you need to work out what they really want; sometimes they can't explain the real thing they are worried about.
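To illustrate the durability point (a SQL Server In-Memory OLTP sketch - the table is purely hypothetical, and it assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup): only SCHEMA_ONLY keeps rows purely in memory, and those rows are lost on restart.
-- Rows in this table are never persisted to disk and disappear when the
-- server restarts; with DURABILITY = SCHEMA_AND_DATA they would be written to disk.
CREATE TABLE dbo.SessionCache
(
    SessionId INT NOT NULL PRIMARY KEY NONCLUSTERED,
    Payload   VARBINARY(MAX) NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);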
SignatureNotFoundException
Please don't repost if your comment does not appear immediately: all of these went to moderation and required a human being to review them for publication. In order to prevent you being kicked off as a spammer, all three had to be accepted, and then I have to clean up the spares. Have a little patience, please!
I've deleted the other versions.
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
Thank you for the clean-up. I didn't mean to repost - it looked like a bug: when I pressed the Post button, nothing happened, so after a while I tried again. I think it happened because of my poor internet connection.
SignatureNotFoundException
For example, I have two PCs on a LAN. PC1 has SQL Server and Management Studio installed, while PC2 has only Management Studio. When I execute an SP from Management Studio on PC1 the output comes after 4 minutes, but when I execute the same SP from Management Studio on PC2 the output comes after 11 minutes. Both PCs are on the same LAN, so why does PC2 take so much longer to show the data?
Please suggest some guidelines for diagnosing this problem. How do I figure out where the problem lies, and how do I determine whether it is a SQL Server issue or a network issue?
Please help in such a way that I can pin down where the problem is. Thanks.
What are the record counts (result set)? Everything else is just guessing at this point.
It was only in wine that he laid down no limit for himself, but he did not allow himself to be confused by it.
― Confucian Analects: Rules of Confucius about his food
Record count may be irrelevant as the SP is probably doing a bunch of processing.
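One way the OP could separate server time from everything else (a sketch - the procedure name is a placeholder): run the SP with server-side timing switched on from both PCs and compare what SQL Server itself reports with the wall-clock time each PC observes. If the server-side numbers match, the extra minutes are being spent transferring and rendering the results on PC2, not executing the query.
SET STATISTICS TIME ON;
EXEC dbo.TheStoredProc;   -- placeholder for the actual stored procedure
SET STATISTICS TIME OFF;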
Never underestimate the power of human stupidity -
RAH
I'm old. I know stuff - JSOP
He's comparing "on server" with "client and server". He says the client setup runs "slower".
What role is the client playing?
Do you know how many records he's downloading and what the impact is?
Does he report progress on the first record (async); or does he download an entire result?
...
(I find one "open" question is often better (for OP) than a barrage of detailed ones).
It was only in wine that he laid down no limit for himself, but he did not allow himself to be confused by it.
― Confucian Analects: Rules of Confucius about his food
I have a large parameterised SQL query on SQL Server.
When I run it in SQL Server Management Studio (as SQL, doh) or in LINQPad (using C# LINQ to SQL), it runs fast.
If I run it myself in my console app, whether with an EF Core/LINQ query or even a SqlCommand (with SQL text), it is horrendously slow... even though I use the same parameter values for every test... (you know, in case it created a stupid index for the query or something).
I have no clue what else I can do to check/fix that...
Some good suggestions here:
Slow in the Application, Fast in SSMS?[^]
I've often found the connection settings cause this - particularly the ARITHABORT setting, which is ON for SSMS, but OFF for ADO.NET.
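One quick way to check (a sketch - 'MyConsoleApp' is a placeholder for whatever program name your connection reports): compare the SET options the two sessions are actually running with, and then try forcing ARITHABORT on from the application's connection to see if the behaviour changes.
-- Compare the session-level SET options used by SSMS vs the application
SELECT session_id, program_name, arithabort
FROM sys.dm_exec_sessions
WHERE program_name LIKE 'Microsoft SQL Server Management Studio%'
   OR program_name LIKE 'MyConsoleApp%';

-- To test the theory, run this on the application's connection before the query
SET ARITHABORT ON;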
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer