|
Hi,
I have only used SQLite a few times, in stand-alone single-user applications. It uses SQL, so standard database operations are very similar to those on SQL Server.
I don't know the answers to all of your questions, but I do trust that the articles I referred to contain the necessary information; if not, uncle Google knows everything.
However, if you need multi-user support, persistent data, stored procedures, IDE design support, and/or more, then why not use SQL Server right away? And I fail to see how your earlier use of an XML file fits the bill...
BTW: if you have architectural doubts, I think you'd better provide an overall description of what kind of application and system configuration you are considering, rather than asking a number of technical questions without giving much context...
|
I have list1, list2 and list3.
I want to do a left join between list1 and list2, and an equi join between list1 and list3.
I'm looking for sample code. I searched Google but keep finding examples of two joins of the same kind, never an equi join combined with a left join.
Please share some sample code for the scenario I describe here. Thanks.
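To illustrate the scenario being asked about, here is a minimal, self-contained LINQ sketch using a hypothetical Item type and integer keys; substitute your own element types and join keys. The plain join on list3 is the equi (inner) join; the group join into gj followed by DefaultIfEmpty() turns the list2 join into a left join:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Item { public int Id; public string Name; }

public class JoinDemo
{
    // Equi join list1->list3 and left join list1->list2, both keyed on Id.
    public static List<string> Run(List<Item> list1, List<Item> list2, List<Item> list3)
    {
        var q = from a in list1
                join c in list3 on a.Id equals c.Id            // equi (inner) join
                join b in list2 on a.Id equals b.Id into gj    // group join...
                from b in gj.DefaultIfEmpty()                  // ...made a left join
                select $"{a.Id}:{a.Name}/{(b == null ? "-" : b.Name)}/{c.Name}";
        return q.ToList();
    }

    public static void Main()
    {
        // Hypothetical sample data; replace with your own lists.
        var list1 = new List<Item> { new Item { Id = 1, Name = "A" }, new Item { Id = 2, Name = "B" } };
        var list2 = new List<Item> { new Item { Id = 1, Name = "X" } };
        var list3 = new List<Item> { new Item { Id = 1, Name = "P" }, new Item { Id = 2, Name = "Q" } };

        foreach (var line in Run(list1, list2, list3))
            Console.WriteLine(line);   // 1:A/X/P  then  2:B/-/Q
    }
}
```

The null check on b is what distinguishes the left-joined rows: where list2 has no match, b is null and a placeholder is emitted instead.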
|
My problem is that when there are 50 or 100 iterations in the for loop, each doing a repeated join between a list and a DataTable, my code takes a long time to run: about 10 minutes.
if (lst_Broker_BrokerBogey_collection.Any())
{
    foreach (var objbrokerBrogeyData in lst_Broker_BrokerBogey_collection)
    {
        string brokerBogeypath = File.Exists(@Path.Combine(ConfigurationManager.AppSettings["DBPath"].ToString(), cmbTicker.Text, "###~$$$~Bogey.xml".Replace("###", cmbTicker.Text).Trim().Replace("$$$", objbrokerBrogeyData.Broker).Trim())) ?
            @Path.Combine(ConfigurationManager.AppSettings["DBPath"].ToString(), cmbTicker.Text, "###~$$$~Bogey.xml".Replace("###", cmbTicker.Text).Trim().Replace("$$$", objbrokerBrogeyData.Broker).Trim()) : "";
        if (!String.IsNullOrEmpty(brokerBogeypath))
        {
            bool selectedBroker = (dtdgvBrokers.AsEnumerable().FirstOrDefault(x => x.Field<string>("Brokers").Equals(objbrokerBrogeyData.Broker)).Field<string>("Select")).Equals("True") ? true : false;
            if (selectedBroker)
            {
                RDSS_Workbench.QcVerticalViewNew.QcVerticalViewNewProcess.BrokerCodeBrokerNameBrokerBogey objBrokerCodeBrokerNameBrokerBogey
                    = new QcVerticalViewNew.QcVerticalViewNewProcess.BrokerCodeBrokerNameBrokerBogey();
                objBrokerCodeBrokerNameBrokerBogey.BrokerCode = objbrokerBrogeyData.Broker;
                objBrokerCodeBrokerNameBrokerBogey.IsAllowEstimate = (dtdgvBrokers.AsEnumerable().FirstOrDefault(x => x.Field<string>("Brokers").Equals(objbrokerBrogeyData.Broker)).Field<string>("Select Estim.")).Equals("True") ? true : false;
                objBrokerCodeBrokerNameBrokerBogey.IsHistoric = (dtdgvBrokers.AsEnumerable().FirstOrDefault(x => x.Field<string>("Brokers").Equals(objbrokerBrogeyData.Broker)).Field<string>("Historical")).Equals("True") ? true : false;
                objBrokerCodeBrokerNameBrokerBogey.ReviseDate = (dtdgvBrokers.AsEnumerable().FirstOrDefault(x => x.Field<string>("Brokers").Equals(objbrokerBrogeyData.Broker)).Field<string>("Model Date"));
                objBrokerCodeBrokerNameBrokerBogey.BrokerEarnings = (dtdgvBrokers.AsEnumerable().FirstOrDefault(x => x.Field<string>("Brokers").Equals(objbrokerBrogeyData.Broker)).Field<string>("Earnings"));
                DataTable GeneralBrokerDt = new DataTable();
                GeneralBrokerDt = objBrokerCodeBrokerNameBrokerBogey.brokerbogeycontainer.GetBrokerBogey(@brokerBogeypath).Tables[2];
                List<QCHelper> qclist = new List<QCHelper>();
                qclist = (from cf in cfToggleList
                          join broker in GeneralBrokerDt.AsEnumerable()
                          on new { val = cf.Section.Trim().ToUpper(), val1 = cf.Li.Trim().ToUpper(), val2 = cf.StandardDate.Replace("A", "").Replace("E", "").Trim().ToUpper() }
                          equals new { val = broker.Field<string>("TabName").Trim().ToUpper(), val1 = broker.Field<string>("StandardLineitem").Trim().ToUpper(), val2 = broker.Field<string>("StandardDate").Replace("A", "").Replace("E", "").Trim().ToUpper() }
                          into tempJoin
                          from leftJoin in tempJoin.DefaultIfEmpty()
                          where cf.Broker == objBrokerCodeBrokerNameBrokerBogey.BrokerCode
                          select new QCHelper()
                          {
                              Broker = cf.Broker,
                              BrokerName = cf.BrokerName,
                              Section = cf.Section,
                              BrokerEarnings = objBrokerCodeBrokerNameBrokerBogey.BrokerEarnings,
                              Li = cf.Li,
                              StandardDate = leftJoin == null ? cf.StandardDate : leftJoin.Field<string>("StandardDate"),
                              xFundCode = cf.xFundCode,
                              StandardValue = leftJoin == null ? string.Empty : leftJoin.Field<string>("StandardValue"),
                              IsAllowEstimate = objBrokerCodeBrokerNameBrokerBogey.IsAllowEstimate,
                              ReviseDate = objBrokerCodeBrokerNameBrokerBogey.ReviseDate
                          }).ToList<QCHelper>();
                GeneralBrokerDt.Dispose();
                GeneralBrokerDt = null;
                objBrokerCodeBrokerNameBrokerBogey.brokerbogeycontainer.GneralBorgeyDt.Dispose();
                objBrokerCodeBrokerNameBrokerBogey.brokerbogeycontainer.GneralBorgeyDt = null;
                if (qclist.Count > 0)
                    lstBorkerWiseData.AddRange(qclist);
                qclist = null;
                objBrokerCodeBrokerNameBrokerBogey = null;
            }
        }
    }
}
This is my list lst_Broker_BrokerBogey_collection, over which I need to iterate. brokerBogeypath will point to a different file location in each loop iteration.
This line:
bool selectedBroker = (dtdgvBrokers.AsEnumerable().FirstOrDefault(x => x.Field<string>("Brokers").Equals(objbrokerBrogeyData.Broker)).Field<string>("Select")).Equals("True") ? true : false;
dtdgvBrokers is a DataGridView, and the selectedBroker variable will hold a true/false value based on the grid's Select column.
If the selected broker has a true value, then the DataTable GeneralBrokerDt is loaded. The GetBrokerBogey() function loads data from a different XML file in each loop iteration.
In the next lines
qclist = (from cf in cfToggleList
join broker in GeneralBrokerDt.AsEnumerable()
I am joining the list cfToggleList and the DataTable GeneralBrokerDt.
Can you review it for performance and best coding practices?
|
Perhaps someone else might want to look at that in detail, but...
You are doing a loop and making database calls in each iteration. It will be slow because of that.
The solution is not to do a loop. Rather construct a single query that does everything at once.
Also, you should note that there is no magic here: if your list gets 'big' then you will need to break the query up. If you are using MS SQL Server, an 'in' clause has a limit of about 2100 parameters, I believe.
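The batching described above could be sketched like this; the Chunk helper and the 2000 batch size are illustrative assumptions, chosen to stay under SQL Server's parameter limit:

```csharp
using System;
using System.Collections.Generic;

public class BatchDemo
{
    // Split a large id list into batches that stay safely below
    // SQL Server's roughly 2100-parameter limit for an IN (...) clause.
    public static IEnumerable<List<int>> Chunk(List<int> ids, int size)
    {
        for (int i = 0; i < ids.Count; i += size)
            yield return ids.GetRange(i, Math.Min(size, ids.Count - i));
    }

    public static void Main()
    {
        var ids = new List<int>();
        for (int i = 0; i < 4500; i++) ids.Add(i);

        // Each batch would become one parameterized query instead of one giant IN list.
        foreach (var batch in Chunk(ids, 2000))
            Console.WriteLine(batch.Count);   // 2000, 2000, 500
    }
}
```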
|
My data is stored in XML files; I am not using any DB. This is a requirement of my module. Thanks.
|
Hi,
First of all, I agree with jschell.
Then, I haven't read all of that; however, I did notice you are wasting cycles by performing complex operations twice.
Instead of
string brokerBogeypath = File.Exists(@Path.Combine(ConfigurationManager.AppSettings["DBPath"].ToString(), cmbTicker.Text, "###~$$$~Bogey.xml".Replace("###", cmbTicker.Text).Trim().Replace("$$$", objbrokerBrogeyData.Broker).Trim())) ?
@Path.Combine(ConfigurationManager.AppSettings["DBPath"].ToString(), cmbTicker.Text, "###~$$$~Bogey.xml".Replace("###", cmbTicker.Text).Trim().Replace("$$$", objbrokerBrogeyData.Broker).Trim()) : ""; ;
which is hard to read and understand, you'd better do
string brokerBogeypath = Path.Combine(ConfigurationManager.AppSettings["DBPath"].ToString(), cmbTicker.Text, "###~$$$~Bogey.xml".Replace("###", cmbTicker.Text).Trim().Replace("$$$", objbrokerBrogeyData.Broker).Trim());
if (!File.Exists(brokerBogeypath)) brokerBogeypath="";
which is both more readable and faster.
Next, you have a lot of calls to ToUpper(), Trim(), and Replace(...); each of them costs a few microseconds, which accumulates when executed many times. You could probably reorganise them so each is required only once, maybe at the expense of some extra storage.
Same for ConfigurationManager.AppSettings["DBPath"].ToString()
Finally, I would suggest shortening some of the identifiers in order to improve readability; this won't change execution speed however it may make it easier for you (or others) to see what is really going on...
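As a sketch of the "perform each normalization only once" idea: compute one key per string up front and reuse it, instead of re-running Trim/ToUpper on every comparison inside a join. The trim-plus-upper-case Key rule is an assumption taken from the posted code:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class NormalizeDemo
{
    // One normalization rule, applied once per string (assumed: trim + upper-case).
    public static string Key(string s) => s.Trim().ToUpperInvariant();

    public static void Main()
    {
        var brokers = new List<string> { " abc ", "def " };   // hypothetical data

        // Normalize once up front; later lookups reuse the stored keys.
        var normalized = new HashSet<string>(brokers.Select(Key));

        Console.WriteLine(normalized.Contains("ABC"));   // True
        Console.WriteLine(normalized.Contains("XYZ"));   // False
    }
}
```

The same hoisting applies to ConfigurationManager.AppSettings["DBPath"]: read it once into a local variable before the loop rather than on every iteration.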
|
You're welcome.
|
see the code first
var cfToggleList = (from cf in QCHelperall
join toggle in ToggleDataAll
on new { val = cf.Section.Trim().ToUpper(),
val1 = cf.Li.Trim().ToUpper(),
val2 = cf.Period.Replace("A", "").Replace("E", "").Trim().ToUpper(),
val3 = cf.Broker.Trim().ToUpper() }
equals new { val = toggle.Section.Trim().ToUpper(),
val1 = toggle.Lineitem.Trim().ToUpper(),
val2 = toggle.Period.Trim().ToUpper(),
val3 = toggle.Broker.Trim().ToUpper() }
into tempJoin
from leftJoin in tempJoin.DefaultIfEmpty()
select new QCHelper()
{
Broker = cf.Broker,
BrokerName = (from brk in BrokerCodeName
where brk.BrokerCode.ToUpper() == cf.Broker.Split('-')[0].ToUpper()
select brk.BrokerName).FirstOrDefault(),
Section = cf.Section,
Li = cf.Li,
StandardDate = cf.Period,
Value = cf.Value,
FycheckToggle = leftJoin == null ? string.Empty : (leftJoin.ToggleText.Contains("FYCHECKTOGGLE") ? leftJoin.ToggleText : string.Empty),
QcCheckToggle = leftJoin == null ? string.Empty : (leftJoin.ToggleText.Contains("QCCHECKTOGGLE") ? leftJoin.ToggleText : string.Empty)
}).ToList<QCHelper>();
There is a left join between the two lists QCHelperall & ToggleDataAll, and for each iteration I am fetching data from another list based on a match. I am talking about this code:
BrokerName = (from brk in BrokerCodeName
where brk.BrokerCode.ToUpper() == cf.Broker.Split('-')[0].ToUpper()
select brk.BrokerName).FirstOrDefault(),
Can I join BrokerCodeName with the QCHelperall list using an equi join? If possible, please give me code where one left join is between QCHelperall & ToggleDataAll and one equi join is between BrokerCodeName and QCHelperall, in the same query. I believe if I can do that, code execution will be much faster.
Thanks
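One possible shape for the combined query, shown as a self-contained sketch with simplified hypothetical types (Cf, Toggle, BrkMap) standing in for QCHelperall, ToggleDataAll, and BrokerCodeName. The equi join on the broker code replaces the per-row sub-query; the group join with DefaultIfEmpty() keeps the toggle join a left join:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Cf     { public string Broker; public string Section; }
public class Toggle { public string Section; public string Text; }
public class BrkMap { public string BrokerCode; public string BrokerName; }

public class CombinedJoinDemo
{
    // Equi join Cf->BrkMap on the broker code, left join Cf->Toggle on Section.
    public static List<string> Run(List<Cf> cfs, List<Toggle> toggles, List<BrkMap> names)
    {
        var q = from cf in cfs
                // equi join replaces the per-row sub-query on the name list
                join brk in names
                  on cf.Broker.Split('-')[0].ToUpper() equals brk.BrokerCode.ToUpper()
                // left join against the toggle list, as in the original query
                join t in toggles on cf.Section equals t.Section into gj
                from t in gj.DefaultIfEmpty()
                select $"{cf.Broker}|{brk.BrokerName}|{(t == null ? "" : t.Text)}";
        return q.ToList();
    }

    public static void Main()
    {
        // Hypothetical, simplified stand-ins for the real lists.
        var cfs = new List<Cf> { new Cf { Broker = "AB-1", Section = "IS" },
                                 new Cf { Broker = "CD-2", Section = "BS" } };
        var toggles = new List<Toggle> { new Toggle { Section = "IS", Text = "FYCHECKTOGGLE" } };
        var names = new List<BrkMap> { new BrkMap { BrokerCode = "AB", BrokerName = "Alpha" },
                                       new BrkMap { BrokerCode = "CD", BrokerName = "Delta" } };

        foreach (var line in Run(cfs, toggles, names))
            Console.WriteLine(line);   // AB-1|Alpha|FYCHECKTOGGLE  then  CD-2|Delta|
    }
}
```

Note the equi join makes unmatched brokers disappear from the result, whereas the FirstOrDefault sub-query would have produced a null BrokerName; if you need to keep those rows, the BrokerCodeName join would also have to be a left join.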
|
You forgot the kitchen sink.
"(I) am amazed to see myself here rather than there ... now rather than then".
― Blaise Pascal
|
Sorry, I do not understand what you mean. Please guide me on how to restructure my code.
|
Look at all the calls to Trim and ToUpper you've got on nearly every field. That causes two string operations to be performed PER FIELD ON EVERY RECORD. Those operations take time, plus the allocation of new memory for their results. You've added a LOT of overhead just with those operations.
If you have to "normalize" your data in the query, the design of your database needs to be looked at and modified to normalize data during INSERT, not during the query. The design of the database should be done with querying performance in mind, not just a place to store data.
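A sketch of what normalizing at load time (rather than at query time) could look like; the column name and the trim-plus-upper-case rule are illustrative assumptions:

```csharp
using System;
using System.Data;

public class LoadNormalizeDemo
{
    // Normalize fields once, at load time, so later queries can compare directly.
    public static void Normalize(DataTable table, string column)
    {
        foreach (DataRow row in table.Rows)
            row[column] = ((string)row[column]).Trim().ToUpperInvariant();
    }

    public static void Main()
    {
        var table = new DataTable();                 // hypothetical table loaded from XML
        table.Columns.Add("Broker", typeof(string));
        table.Rows.Add("  abc ");

        Normalize(table, "Broker");                  // pay the cost once, not per query
        Console.WriteLine(table.Rows[0]["Broker"]);  // ABC
    }
}
```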
|
Sir, please guide me on how to restructure my code so that I get the best performance. Thanks.
|
This is not about restructuring your code. This is about restructuring your entire application! Database, code, and data handling rules, ... everything, with query performance in mind. This is going back to the beginning and starting over.
Why on earth are you putting data into the database in a form that requires you to .Trim the data that comes back out? Or to convert data to upper case characters to make it useful in the query? This is stuff that has to be done when you INSERT the data into the database, not when you pull it back out! This is about changing the way you think about writing data applications, from scratch.
|
I am not using a database in this project. My module fully depends on XML files; I use the DataSet & XDocument classes to load data from the XML. I use Trim() & ToUpper() when the data has extra spaces or is not in the same case, just so the data matches properly.
Please tell me: what is wrong with the Trim() & ToUpper() functions? What worse thing can happen if we use them? Please explain the bad consequences of the Trim() & ToUpper() functions, if possible. Thanks.
|
Nothing is wrong with them.
Just when you go to do your query, you're doing it in memory in a DataSet or DataTable. Those are not really optimized to do queries against. There are no indexes in a DataSet or DataTable. So when you go do your query, you're doing what's called a "table scan". You're looking at and comparing every single row in every table involved in the query. You're also running both Trim and ToUpper on every single field you specified them on, for every single row in all of your related tables. You're calling those methods literally tens of thousands of times, which is going to slow your query speed considerably.
An XML file is NOT a database. XML is used to format data for transfer. If you're trying to use an XML file as a database, you're making a huge mistake.
You could improve query performance while still using an XML file, but you're going to be writing code to
* custom build a DataSet, with all of your related DataTables,
* importing the data from the XML file with custom code, including calling Trim and ToUpper to normalize the data,
* creating DataView objects on your tables, and sorting them appropriately which will create indexes that will improve query performance.
But, you're never going to get the performance of a dedicated database engine that's built for the purpose of querying.
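The DataView point from the list above could look roughly like this: a sorted DataView maintains an internal index on its sort column, so FindRows does an index lookup instead of scanning every row. The table and column names here are hypothetical; in practice you would build the view once and reuse it across lookups:

```csharp
using System;
using System.Data;

public class IndexDemo
{
    // A sorted DataView indexes the sort column, so FindRows avoids a full scan.
    public static DataRowView[] Find(DataTable table, string broker)
    {
        var view = new DataView(table) { Sort = "Broker" };
        return view.FindRows(broker);
    }

    public static void Main()
    {
        // Hypothetical table standing in for one loaded from an XML file.
        var table = new DataTable("Brokers");
        table.Columns.Add("Broker", typeof(string));
        table.Columns.Add("Value", typeof(string));
        table.Rows.Add("AB", "1");
        table.Rows.Add("CD", "2");

        foreach (DataRowView hit in Find(table, "CD"))
            Console.WriteLine(hit["Value"]);   // 2
    }
}
```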
|
Hopefully you own the database, because it REALLY stinks: you are making joins on text fields which you have not validated on entry. It also seems like you have no referential integrity in your database other than user input.
In other words, you have created a disaster in your database. As Dave said, you need to go back to the beginning and redesign your database so it conforms to proper data structures. This site may help you: Data Models[^]. But I suspect you need to get a book and start learning; it cannot be done via forum posts.
Never underestimate the power of human stupidity -
RAH
I'm old. I know stuff - JSOP
|
Sir, I am not using a DB; rather, I am reading data from an XML file. Thanks.
|
How to make code run differently depending on the platform it is running on?
I have a challenge for you.
I have an ASP.NET Web Application built in C# with Microsoft Visual Studio 2017. There is a line of code that we have had to include in order for it to run on localhost when debugging by attaching to a running process in a web browser.
Here is the issue. That line of code is not necessary when the process is running on the server.
I can remove that line when I submit the code in TFS. But it would be great if we could make the code somehow ignore that line depending on what platform it is running on.
How do you think this could be done?
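Two common approaches to this kind of problem, sketched below: a compile-time #if DEBUG guard (assuming your local runs use Debug builds and the server runs a Release build) and a run-time Debugger.IsAttached check. The mode strings are hypothetical:

```csharp
using System;
using System.Diagnostics;

public class PlatformDemo
{
    // Run-time check: true only when a debugger (e.g. Visual Studio) is attached.
    public static string Mode() => Debugger.IsAttached ? "local-debug" : "server";

    public static void Main()
    {
#if DEBUG
        // Compile-time alternative: this line exists only in Debug builds
        // and is stripped entirely from the Release build deployed to the server.
        Console.WriteLine("debug-only line");
#endif
        Console.WriteLine(Mode());
    }
}
```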
|
I would question why that line is needed at all. Perhaps your real problem is figuring out what process to attach to when you want to debug. If so you can log the process id. That provides all that you need.
Other than that, there is a problem with running code one way on some machines and differently in production: you are then not testing the code that actually runs in production. Consequently, problems can occur in production which you have no way to replicate.
|
my project doesn't build. my code is true
|
This is not a good question - we cannot work out from that little what you are trying to do.
Remember that we can't see your screen, access your HDD, or read your mind - we only get exactly what you type to work with.
But ... if your code was "true" (we would say "correct") then it almost certainly would build, so you need to start by looking at what it is telling you as to why it won't. Assuming that you have actually tried to build it - and you'd be surprised how many don't realise you have to press a button or a keyboard key to do that - then Visual Studio tries very hard to give you understandable error messages.
Start by looking at two places (both accessible from the VS "View" menu if you can't see them):
1) The error pane. This contains a list of all the problems VS found when you tried to compile. If this is empty, then the chances are it compiled OK. If it isn't read the message and double click it - that will take you to the line it didn't like. Even Warnings should be checked if you are a beginner, they are there because you are doing something that is probably wrong!
2) The output pane. This shows a summary of the build process, and also any problems that occurred during run time if you got that far.
Assuming that Errors is empty and Output looks fine, you need to look at your code and exactly what is actually happening.
If you can't work it out, show us your code, and any messages, and give us loads of detail as to what you tried that produced it - the better your question, the better our response can be. Typing as little as possible helps nobody!
Sent from my Amstrad PC 1640
Never throw anything away, Griff
Bad command or file name. Bad, bad command! Sit! Stay! Staaaay...
AntiTwitter: @DalekDave is now a follower!
|