|
Yep, I had the same experience with their editing team.
At one point they tried to make me use this god-awful online tool in a browser to edit my copy; it was horrendous.
I later found out from another source that the reason they wanted to use the online tool was that it gave them complete control: they didn't like sharing manuscripts via channels they couldn't lock up at a moment's notice should they wish to.
I didn't get the slave treatment, though over the course of a year and a half they assigned me three different copy editors, all of whom had completely different ideas on layout, style, and wording.
I did start an approach with Apress on a project, but they disappeared and ghosted me after the first three meetings. I never tried O'Reilly, but I've been told that of all the top-tier I.T. publishers, they are the best of the bunch.
|
|
|
|
|
So, my experience was in 2018-19. When was yours?
O'Reilly has an online tool called Atlas, but Atlas is pretty wonderful.
There's some author evangelist from Packt India sending me emails promising to make everything better for future authors. I wished him luck, but said it was too late to improve my bad interactions with Packt. There are just so many authors who have had bad experiences with Packt that I wonder how they get new content.
|
|
|
|
|
Mid-2016 through to mid-2018, but I had heard plenty of bad stuff before that, yet still elected to work with them against my better judgement. Only when they realised they were not going to release the book on the exact same day Core 3.1 dropped did they try to wise up.
If the headhunter is Alok Dhuri, tell him Shawty says hi, and where's my whiskey?
As for new content, if you look at many of the new titles, all the author names are Indian. There are very few now that are not.
|
|
|
|
|
You start as a specialist and evolve into a generalist, and then find there are now multiple specialties.
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
|
|
|
|
I feel that any training, online or offline is like a springboard, familiarizing one with the buzzwords and showing a few levels of "Hello World" projects.
The actual learning starts after the training ends.
|
|
|
|
|
Amarnath S wrote: I feel that any training, online or offline is like a springboard, familiarizing one with the buzzwords and showing a few levels of "Hello World" projects.
The actual learning starts after the training ends.
I agree. All the reading and watching and lecturing are just preparation -- the "doing" brings the basics together, and then growth begins from that foundation.
|
|
|
|
|
They can only really teach the basics because developer work is so fractured in terms of what's available. There are dozens of languages, and hundreds of platforms and libraries that "make your job easier". There's no way anyone can teach that.
The rest of what you learn is from experience that you gain from having to perform certain programming tasks and the tools that you are required to use to perform those tasks. Many of those considerations are determined by the company you work for.
".45 ACP - because shooting twice is just silly" - JSOP, 2010 ----- You can never have too much ammo - unless you're swimming, or on fire. - JSOP, 2010 ----- When you pry the gun from my cold dead hands, be careful - the barrel will be very hot. - JSOP, 2013
|
|
|
|
|
Yes, that's good insight. Even as a hobby, everything I've coded has involved different things.
|
|
|
|
|
There are now numerous ways one can learn to develop web applications, making this part of the development profession quite daunting for newcomers.
Right now you have the following options to choose from...
Java Servlets
Java Framework
Pure PHP
Microsoft .NET PeachPie PHP extension for Visual Studio (compiles your PHP, so you don't ship source)
Microsoft ASP.NET MVC
Microsoft ASP.NET Blazor Client-Side
Microsoft ASP.NET Blazor Server-Side
HTML, CSS, JavaScript combos
To get up and running quickly, pure PHP is probably the best and easiest route to take. PHP web development makes up close to 30% of all web sites today. W3Schools probably has the best basic courses of anyone (read and try with their built-in editors): https://www.w3schools.com/
Next up, I would recommend Microsoft ASP.NET Blazor Server-Side as this is a very credible framework to work with. There is a lot of available information on the Internet but probably the best place to start is at https://docs.microsoft.com/en-us/aspnet/core/blazor/?view=aspnetcore-6.0
Note that all of the Microsoft web development options have become overly complex over the years, but if you want to work with such an implementation, ASP.NET Blazor Server-Side will keep you primarily in the PHP-like camp. Blazor is only supported by C#, which is not as difficult to learn as C/C++.
Finally, the messy route is the HTML, CSS, JavaScript combination. HTML is still the markup language that displays web pages, while CSS is still the standard for styling those pages. Both of these technologies are used in all of the options listed above.
It is JavaScript that the professional community has a love-it-or-hate-it relationship with. I am one of those who despises the language. To begin with, it is internally a mess, and the standards committee has never seen fit to have it cleaned up. Next, its syntax has become as arcane as anything you will come across in any of the major development languages. And if you use JavaScript components and frameworks, you are probably going to experience conflicts among them, making your debugging life a nightmare. Finally, the language never feels like a mature and stable environment. This is most likely a result of the fact that the language was never designed to support how it is being used today. As a result, vendors have created alternatives to make using the raw language easier, such as Microsoft's TypeScript.
It is also highly insecure, like everything else that is focused on client-side development.
However, this combination of technology skills can all be learned at the W3 Schools mentioned above.
Right now, I am researching techniques for the development of a multi-user application based on my current desktop application for document management. If I weren't so invested in my Microsoft skills, I would definitely go with pure PHP, as it is the equivalent of the Classic ASP development we did on the Internet in the 1990s and early 2000s, but now more powerful.
But being a Microsoft software engineer, I am currently looking at Blazor Server-Side to see how close I can make it to the original ASP.NET WebForms model, which to me was the zenith of web development environments due to its greater ease-of-learning and powerful capabilities.
ASP.NET WebForms is still available in the older, standard .NET Frameworks but is not recommended for new development.
ASP.NET Blazor Server-Side however, is getting rather close to a modern reincarnation of the WebForms model. However, more refinements are required for the implementation of pre-built components so that they are easier to use. Right now, working with some of them can drive a person to drink...
Steve Naidamast
Sr. Software Engineer
Black Falcon Software, Inc.
blackfalconsoftware@outlook.com
|
|
|
|
|
Thanks for the recommendations!
|
|
|
|
|
EDIT 2: I DID IT. THIS ROCKS
I'm not entirely sure how they do it. The code is a labyrinth; it has over 100 contributors.
Alpha blending without hardware acceleration is slow because you have to read the source pixel by pixel, blend each pixel with the new color, and then write the result back to the destination.
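In RGB565 (the usual 16-bit pixel format on these displays), that per-pixel work looks roughly like this - a sketch with illustrative names, not code from LVGL or any particular library:

```cpp
#include <cstdint>

// Blend `src` over `dst` at the given alpha (0 = keep dst, 255 = full src).
// Both pixels are RGB565: unpack the 5/6/5-bit channels, interpolate each
// one, and repack. This is the cost paid for every pixel in the naive path.
inline uint16_t blend565(uint16_t dst, uint16_t src, uint8_t alpha) {
    int dr = (dst >> 11) & 0x1F, dg = (dst >> 5) & 0x3F, db = dst & 0x1F;
    int sr = (src >> 11) & 0x1F, sg = (src >> 5) & 0x3F, sb = src & 0x1F;
    int r = dr + ((sr - dr) * alpha) / 255;   // linear interpolation
    int g = dg + ((sg - dg) * alpha) / 255;   // per channel
    int b = db + ((sb - db) * alpha) / 255;
    return (uint16_t)((r << 11) | (g << 5) | b);
}
```

Doing that read-blend-write for every single pixel of a filled rectangle is what makes the naive approach so slow.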
I was doing this naively, but I want to approach LVGL speed. I need to rise to the challenge.
I'm now planning on reading pixels line by line from the display source in batches, then scanning each line, and only reblending if the source color changes when I fill a rectangle.
Filling rectangles is a primitive that almost all drawing operations (including line drawing) use, so it basically speeds up everything.
To complicate things, I can't guarantee that memory will be available to copy the source line into a buffer, so if it's not, I need to fall back to a pixel-by-pixel read.
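Under those assumptions, the run-caching scan over one line might look something like this - a self-contained sketch (the blend helper and buffer handling are illustrative, not the actual implementation):

```cpp
#include <cstdint>

// RGB565 blend helper: interpolate each channel of dst toward src by a/255.
static uint16_t blend565(uint16_t dst, uint16_t src, uint8_t a) {
    int dr = (dst >> 11) & 0x1F, dg = (dst >> 5) & 0x3F, db = dst & 0x1F;
    int sr = (src >> 11) & 0x1F, sg = (src >> 5) & 0x3F, sb = src & 0x1F;
    return (uint16_t)(((dr + ((sr - dr) * a) / 255) << 11) |
                      ((dg + ((sg - dg) * a) / 255) << 5)  |
                       (db + ((sb - db) * a) / 255));
}

// Blend a solid `color` at `alpha` over one scanline in place, re-blending
// only when the underlying source pixel changes. Runs of identical source
// pixels reuse the cached result, which is the point of the batched scan.
void blend_line(uint16_t* line, int w, uint16_t color, uint8_t alpha) {
    if (w <= 0) return;
    uint16_t last_src = line[0];
    uint16_t last_out = blend565(last_src, color, alpha);
    for (int i = 0; i < w; ++i) {
        if (line[i] != last_src) {     // source changed: blend once
            last_src = line[i];
            last_out = blend565(last_src, color, alpha);
        }
        line[i] = last_out;            // runs cost only a compare and store
    }
}
```

On a solid background the inner loop degenerates to one blend for the whole line, which is where the big win over the pixel-by-pixel path comes from.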
Here goes nothing.
Wish me luck.
Edit: I sped it up some, but it speeds up much more when the draw destination supports direct reads into RAM. However, the display device I'm using this with reads pixels in 18-bit format rather than 16-bit and, worse, padded to 24 bits. That means no matter what I do, I have to convert each pixel to 16-bit anyway. I know how to speed that up a little even in that case, but nowhere near approaching that LVGL demo. It makes me wonder if they aren't doing some sort of faux alpha blending in that demo rather than the real thing. That wouldn't surprise me at all. I have an idea of how to do that anyway.
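For reference, converting one of those padded 18-bit reads back to RGB565 is just a repack - assuming the common panel layout where each channel sits in the top 6 bits of its own byte (check your display's datasheet; layouts vary):

```cpp
#include <cstdint>

// One 18-bit pixel arrives as 3 bytes, with R, G, and B each left-justified
// in its byte. Keep the top 5 bits of red and blue, the top 6 of green, and
// pack them into a 16-bit RGB565 value.
inline uint16_t rgb666_to_565(uint8_t r, uint8_t g, uint8_t b) {
    return (uint16_t)(((r & 0xF8) << 8) | ((g & 0xFC) << 3) | (b >> 3));
}
```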
Real programmers use butterflies
modified 20-Feb-22 10:44am.
|
|
|
|
|
I believe you can do it...
Unless the original code was very good... but with 100 contributors there is a lot of room for messy, useless, redundant, slow code!
|
|
|
|
|
I did it, man. I'm not sure how it stacks up against LVGL because I still think their demo probably cheats, but I sped it up by orders of magnitude in most situations (as long as there's enough memory to make a temporary bitmap to blend to).
Real programmers use butterflies
|
|
|
|
|
Niiiiiice!
|
|
|
|
|
I have a Wio Terminal. It's a little $40 IoT widget that has 192kB of SRAM in it.
It *also* ostensibly has 4MB of PSRAM.
However, there is no documentation on using this extra PSRAM.
The last reply on their support forum was two years ago.
There are no samples that use this PSRAM, either from them or from any third party I can find, anywhere on GitHub or elsewhere for that matter.
What's the point of spending the money to have 4MB of RAM in your device if you're not going to take half an hour and at least produce a sample that uses it? Why spend the money?
It just floors me that people think they can release products without documentation, put up a few YouTube videos, and call it done.
Real programmers use butterflies
|
|
|
|
|
honey the codewitch wrote: What's the point of spending the money to have 4MB of RAM in your device
Marketing.
honey the codewitch wrote: if you're not going to take half an hour and at least produce a sample that uses it?
Idiot management.
|
|
|
|
|
You're not wrong! Grrr!
Real programmers use butterflies
|
|
|
|
|
David O'Neil wrote: Idiot management
Or because they couldn't. Many years ago I bought a small board (I don't remember the seller's name) for the RAM, and it wasn't even connected - there were no PCB traces going to it. It was just glued onto some free space left over from their previous board revision.
Easy money for them. Idiots, those of us who bought it.
|
|
|
|
|
Option 1: The device becomes popular and people will start hacking away. Sooner or later someone will figure out how to access the PSRAM and you just saved the time it takes to write samples (which is more than half an hour - could easily be several hours)
Option 2: The device turns out not to be very popular, and you just saved the investment in making examples.
Of course, one could argue that documentation is required (well, helpful) for a device to become successful, but why make things complicated when the other options are easier for management to understand in a PowerPoint?
|
|
|
|
|
Well, the thing is a couple years old at least, so I believe I'm stuck with option 2.
Real programmers use butterflies
|
|
|
|
|
In the "supermini" days, I was working for a small company making a VAX competitor. The company didn't have the development resources to design different models. But the market demanded a "range" - entry-level alternatives, top-range alternatives. So the question came up: How to differentiate, when the core machine is identical in all the alternatives?
This was in the pre-RISC days. CISC CPUs were microcoded, and this machine loaded its microcode from disk as part of the boot process. So one proposal that was seriously considered was to make an entry-level model by inserting wait cycles in the microcode, leaving the hardware 100% identical to the higher models. It didn't end up that way, though. Cache memory was extremely expensive: removing the cache saved about 40,000 Euro in component costs while roughly halving the CPU speed, so that alternative was chosen.
For the top-range model, the machine was delivered in a twin cabinet, with lots of space for I/O cards (this was essential for lots of customers) and possibly small internal disks. The CPU was identical in speed and functionality to the mid-range model. I taught a course in programming these machines, and one of the participants got furious when I told her that the top-range model was no faster than the mid-range model: she threatened to sue the company for fraud; they had spent the extra money on the top model to get the fastest CPU available, and it turned out to be a waste!
Another "one size fits all"-solution employed by this company: The machine had a hidden disk, not visible to the customer, containing the full suite of proprietary software. When a customer bought some software, it was distributed on a 360K floppy containing the license key, which was a decrypt key for copying the software from the hidden disk to the ordinary working disk. (This obviously was before the internet, so the alternative would have been to ship the software on 42 floppies.)
So, my guess is that the extra RAM may be there for some other use of the same design; the manufacturer maintains a single design, a single production line. Maybe the other use is a completely different product. Maybe there was a planned product never making it to the market, that would be using this RAM.
Hardware sometimes is like software: I am certain that at least 50%, but most likely 80-90%, of the Microsoft Office code has never been executed on my PC and never will. But MS won't make a special MSO edition for me, with only the functions I use. You have a piece of hardware with components your Wio Terminal does not use. Fair enough - maybe someone else uses it. Reusable hardware design - reusable software design; that is two sides of the same coin.
|
|
|
|
|
trønderen wrote: my guess is that the extra RAM may be there for some other use
Honestly, I doubt it, if only because they advertise it prominently. I think it's far more likely that they just suck at documenting their products. I've run into a lot of IoT boards like that. Some I've even had to throw away.
Real programmers use butterflies
|
|
|
|
|
You haven't requested/obtained the information from the manufacturer on how to use this RAM. So you think the RAM shouldn't be there.
I fully recognize your opinion that the RAM shouldn't be there. That doesn't imply that I agree with you.
|
|
|
|
|
I have indeed, and I've exhausted every available contact avenue I have had with them.
I don't think it shouldn't be there. I think it should have been documented.
Real programmers use butterflies
|
|
|
|
|
My theory:
The memory is faulty or otherwise unusable. Sticking it on a board and sending it out to customers is cheaper than disposing of it according to environmental regulations.
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|