|
90% of family tech support is now managed by my wife, I enter the scene only when SHTF or a network must be set up (she's perfectly capable of connecting any device to a running network).
She is way more patient than me at help desk. And for smartphone stuff I end up asking her too; she uses hers much more than I use mine.
GCS d--(d-) s-/++ a C++++ U+++ P- L+@ E-- W++ N+ o+ K- w+++ O? M-- V? PS+ PE- Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++* Weapons extension: ma- k++ F+2 X
|
|
|
|
|
That's why I don't tell my family that I work with computers. I've told them that I am a drug-dealing pimp, and they seem to have believed that...
- Anything that is unrelated to elephants is irrelephant. Anonymous
- The problem with quotes on the internet is that you can never tell if they're genuine Winston Churchill, 1944
- Never argue with a fool. Onlookers may not be able to tell the difference. Mark Twain
|
|
|
|
|
I love the "after you installed the latest update on my phone the microwave stopped working. Can you undo whatever it is you did?"
cheers
Chris Maunder
|
|
|
|
|
|
I didn't realize Ovo Energy read that cartoon!
UK[^], Everyone else[^]
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
I didn't realize that Calvin & Hobbes had restarted. I guess my complete edition is no longer complete!
EDIT 1: High of -17C (2F) here today.
EDIT 2: Looks like they're reruns and that it hasn't restarted.
modified 11-Jan-22 11:33am.
|
|
|
|
|
Quote: The last strip of Calvin and Hobbes was published on December 31, 1995.
from here
I also have the complete edition (and am proud of it); perhaps this is a reminder to have a look at it again ...
|
|
|
|
|
GoComics runs comics on a loop.
You get 50 years of Peanuts, 10 years of C&H. C&H just recently (well, a few months back) started from the beginning again.
// TODO: Insert something here Top ten reasons why I'm lazy
1.
|
|
|
|
|
|
Damn entitled kids.
The less you need, the more you have.
Even a blind squirrel gets a nut...occasionally.
JaxCoder.com
|
|
|
|
|
No, just smart, I wish I could see those comics more often.
|
|
|
|
|
So I'm editing some Crystal Reports again (excuse my inappropriate language).
I have this one project that still uses them.
I've searched for alternatives plenty of times and found things like DevExpress Reporting, which I've heard good things about too.
Not that I'm going to rewrite all reports for this particular project, but maybe for a future project.
However, for other projects I don't even bother anymore and simply use MigraDoc and create PDF files in code manually.
Writing stuff like:
frame = section.AddTextFrame();
frame.Width = "12cm";
frame.Left = "10cm";
frame.RelativeHorizontal = RelativeHorizontal.Margin;
frame.Top = "6cm";
frame.RelativeVertical = RelativeVertical.Page;
It's not ideal, but it still beats CR (though pretty much anything would).
To me, the whole reason to use a report generator like CR is that your clients can create their own reports.
Kind of like a no-code solution for your reports.
In practice, however, clients don't understand these tools anyway and still ask me to change reports for them.
Meanwhile, a bit of code reuse ensures your reports have the exact same headers, footers, etc. while not being all that much harder for me (or even lots easier in case of CR).
Thoughts? Generator vs. in-code? Your generator of choice?
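To make the code-reuse point concrete, here is a minimal C# sketch of the kind of helper I mean: one method that stamps the same header and footer on every section, so all reports stay consistent. The class and method names are my own invention, not part of MigraDoc; only the MigraDoc calls themselves (AddSection, Headers.Primary, AddPageField, etc.) are real API.

using MigraDoc.DocumentObjectModel;

static class ReportLayout
{
    // Hypothetical helper: every report section gets the same header/footer.
    public static Section AddStandardSection(Document doc, string title)
    {
        var section = doc.AddSection();
        section.PageSetup.TopMargin = "3cm";

        // Shared header: bold report title on every page.
        var header = section.Headers.Primary.AddParagraph(title);
        header.Format.Font.Bold = true;

        // Shared footer: centered "Page N" via MigraDoc's page field.
        var footer = section.Footers.Primary.AddParagraph();
        footer.AddText("Page ");
        footer.AddPageField();
        footer.Format.Alignment = ParagraphAlignment.Center;

        return section;
    }
}

Change the helper once and every report picks it up, which is exactly the consistency a report designer like CR makes you click together per report.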
|
|
|
|
|
I have used and would use Devexpress again.
Why? Because of actual functioning support.
|
|
|
|
|
Got a project with DevExpress. No real opinion on it besides that you should never use it - but that goes for any reporting tool that messes with the database.
For anything somewhat serious you need a layer between the user and the database. Luckily we have "somewhat of a layer", so our users are not completely coupled to the database structure. This is more by luck than design, as the original developers had no clue you should never do that - I guess that happens when you pride yourself on only employing the smartest people... but then ignore their lack of experience. Unfortunately this layer means "try reading everything into memory, then combine it at runtime" if you do not know exactly what you are doing... oh well... Most reports customers create still execute in less than 24 hours... not all of them though.
Luckily we have a couple of non-developers who know their way around the tooling (better than us developers).
Sure, you could probably do some things with database views to decouple... but... ehh... it's 2022, can we please start working on top of APIs, thank you very much.
I hope this nonsense goes away and we can offload to PowerBI and similar in the future. But our customers can't just throw all their data in the cloud, so getting too many on-prem dependencies is also problematic.
|
|
|
|
|
lmoelleb wrote: No real opinion on it besides that you should never use it - but that goes for any reporting tool that mess with the database.
Funny comment.
It's not as if you can't use it with a model instead of a database.
|
|
|
|
|
Could be - in our project they "saved time" by using the same model across all tiers - so the model and the database are pretty much the same crap. There probably is a better way to build it with DevExpress reporting, but I am not going to look for it - it will be booted out instead of being corrected.
|
|
|
|
|
Seems to me that your problem isn't Devexpress, but the implementation of it.
Anyway...
Sometimes the best solution is to start over, and when you do, use the tools you know best.
|
|
|
|
|
Planned to start later this year. Yes, implementation is a bigger problem than DevExpress.
But let's just say I am not impressed with libraries that use catch {} (hours of my life I will not get back), nor am I impressed with ORMs that default to deferred deletion, so inexperienced developers start using it without understanding how it is implemented (that was fun, getting a large in-production database cleaned up). It does of course help if the people writing the ORM at least know how a GUID is sorted by various databases... oh wait... they don't (or at least didn't in the version I looked at).
|
|
|
|
|
Never knew there was more than one way to sort a GUID.
Googled it.
|
|
|
|
|
Wow! I had the same revelation.
Actually it never occurred to me to sort GUIDs. In my mind they were the epitome of randomly assigned, meaningless numbers. Why sort them?
Mircea
|
|
|
|
|
They are generated sequentially because random data is horrible for efficient indexes in large databases. Earlier versions of GUIDs were constructed from the MAC address + a timestamp + a random component. Due to security concerns about leaking MAC addresses (real or not) and the increase of software-generated MAC addresses, this was changed. But some databases do - for compatibility reasons - keep the ordering based on the bytes that used to contain the timestamp.
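The ordering difference is easy to see in C#: .NET's Guid.CompareTo compares the leading component first, while System.Data.SqlTypes.SqlGuid compares the way SQL Server does, weighing the last six bytes most heavily. A small sketch (the GUID values are made up purely so the two orderings disagree):

using System;
using System.Data.SqlTypes;

// First component says a < b; last six bytes say a > b.
var a = new Guid("00000001-0000-0000-0000-000000000002");
var b = new Guid("00000002-0000-0000-0000-000000000001");

Console.WriteLine(a.CompareTo(b));                            // negative: .NET order, a before b
Console.WriteLine(new SqlGuid(a).CompareTo(new SqlGuid(b)));  // positive: SQL Server order, a after b

So the same pair of GUIDs sorts in opposite directions depending on who is doing the comparing - which is exactly the trap an ORM can fall into.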
|
|
|
|
|
I think my logic and database logic are slightly different. Good thing we don't meet very often
If you need "sequential GUIDs" why not just use a counter? Peano's Axioms basically say: if you've got enough bits, I've got enough numbers. If you want to make them unique over a number of different computers just add a computer GUID. In essence that's what version 1 and version 2 GUIDs do, except that they use a supposedly unique MAC address.
Now I'll go my merry way trying to keep away from databases as much as I can
Mircea
|
|
|
|
|
I inherited the project designed this way. One of the reasons given was a request to be able to clone the database and update it in different locations (no, the whole world does not yet have internet), but obviously that was never realized - and the problems that would need to be solved to support data merge are so huge that dealing with integer ids at the same time is not exactly making it a lot worse.
The projects where I was responsible for the initial data design use integer ids - though if there is a specific use case served by using a GUID as primary key in a low-volume table I will of course do it. Typically, anything above the data layer I simply lie to and claim the ID is a string (or a struct of some kind), no matter what it is in the database. I do not want a UI to break because of something as trivial as changing the primary key format in a table. Sure, it means a bit more conversion, but nothing compared to the time spent waiting for data from the database.
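That "lie about the key type above the data layer" idea can be sketched as an opaque ID struct in C#. Everything in the struct below is illustrative (the name CustomerId and the helper methods are mine, not from any project); the point is that only the data layer knows the real database key type:

using System;

public readonly struct CustomerId : IEquatable<CustomerId>
{
    // Above the data layer, an ID is just an opaque string.
    private readonly string value;
    public CustomerId(string value) => this.value = value;

    // Only the data layer converts to/from the real key type.
    // Swap Guid for int here and nothing above the data layer changes.
    public static CustomerId FromDatabaseKey(Guid key) => new(key.ToString("N"));
    public Guid ToDatabaseKey() => Guid.ParseExact(value, "N");

    public bool Equals(CustomerId other) => value == other.value;
    public override bool Equals(object? obj) => obj is CustomerId other && Equals(other);
    public override int GetHashCode() => value?.GetHashCode() ?? 0;
    public override string ToString() => value;
}

The conversion cost is a string parse per key, which, as noted above, is noise next to the database round-trip.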
|
|
|
|
|
lmoelleb wrote: random data is horrible for efficient indexes in large databases
Not necessarily so; it depends on how the database is used.
Random keys are good for writes.
If you have a sequential key, all inserts happen in the same leaf node at the end of the index, which leads to waits because of page locks. The same goes for updates, since most updates happen on fresh data.
If you use a random key, writes happen at random places, which means page locks are less often a problem. Also, the index tends to stay balanced.
The drawback with random keys is that they cause page splits.
On a simple index this isn't a problem, since it would happen every hundredth insert or so, but if the table is clustered, the page splits could cause serious performance issues.
So, don't cluster a table on a random key, since the drawbacks are serious and the advantage (index scan) is gone.
|
|
|
|
|
We use Telerik, calling stored procedures that provide the data. The stored procedure, like a view, isolates us somewhat from the actual table and column names.
|
|
|
|