charlieg wrote: Are you from Stack Overflow
Since he did not ask you to first go get a CS degree before posting questions, he probably is not.
lol, true. I'm an EE and I've written lots of code, but I've always wanted to take an algorithms class. Don't know why - well, at least my analyst asks me.
Charlie Gilley
Stuck in a dysfunctional matrix from which I must escape...
"Where liberty dwells, there is my country." B. Franklin, 1783
“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
I'm an EEE myself, and I've written a whole bunch of algorithms, but I've never seen an algorithms class. What would that teach? Some of my engineering textbooks included digital implementations of the math they taught - that sort of thing? Some of the algorithms I came up with in my youth actually got published as company standards, like my classic algorithm for generating code at run time to filter out ambient noise in a factory environment. It drove the QA types nuts, because every time the test application ran, different code was executing; they hate that stuff. Today I don't know of any language that allows an application to modify itself while it's running, so I could never duplicate that one. But it was fun!
Will Rogers never met me.
My initial reaction is to agree, but the term is nonetheless useful.
The embedded world lives mostly in C, and the most versatile environment I know of achieves deep reusability through an insane number of defines, to the point that finding references to specific symbols often requires 4-5 indirections. I'm talking about AutoSAR specifically.
It really depends on where you sit: are you developing a platform or a product? If you work on the final product, keeping things flexible may actually hinder development: either you spend a lot of time trying to perfectly model the product in order to have flexibility where needed, or you apply a flexible model that doesn't fit the specific product and end up breaking the model to make it work.
If you're working on a platform, that's different. The result will, hopefully, be shared between hundreds of completely different systems, so the expense of finding the right model and maintaining it pays off.
GCS d--(d+) s-/++ a C++++ U+++ P- L+@ E-- W++ N+ o+ K- w+++ O? M-- V? PS+ PE- Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++* Weapons extension: ma- k++ F+2 X
Product, and you just hammered my nail.
Charlie Gilley
Stuck in a dysfunctional matrix from which I must escape...
"Where liberty dwells, there is my country." B. Franklin, 1783
“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
den2k88 wrote: AutoSAR
Someone else on CP who knows AutoSAR!
Got thrown into it about a year and a half ago. It was bound to happen, working in a company that has 80% automotive contracts.
GCS d--(d+) s-/++ a C++++ U+++ P- L+@ E-- W++ N+ o+ K- w+++ O? M-- V? PS+ PE- Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++* Weapons extension: ma- k++ F+2 X
I work in the company that kind of created it, so avoiding it completely is not possible.
LOL, so you probably know a friend of mine - he moved to Germany several years ago and works there too.
GCS d--(d+) s-/++ a C++++ U+++ P- L+@ E-- W++ N+ o+ K- w+++ O? M-- V? PS+ PE- Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++* Weapons extension: ma- k++ F+2 X
Everybody in Germany knows someone who works there
I was almost thrown into a project where AutoSar was involved.
Luckily for me, another project started to burn down and I was sent there at the last moment to extinguish the fire, so I could remain in my blessed ignorance.
M.D.V.
If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
Inheritance is very useful: create a "base class" (preferably abstract) and derive the classes you actually use from that. I have base forms, user controls, and general classes, and it can really save you from mistakes - like forgetting to update one out of five classes to fix a bug. If it's part of the base class, you fix it in one place and it updates all the others.
Admittedly, it's been a long while since I did any embedded work (sob!) but if you can cope with the performance hit that the higher level language brings with it then the improvement in reliability and maintenance effort is well worth it.
I generally couldn't - I had low-frequency processors and restricted RAM, plus I had to run 24/365, so heap allocation at run time was pretty much a no-no.
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
I did use it in a sort-of-embedded system that had to control several types of hardware, which could be of different makes and models with different communication systems. The code would instantiate x generators and x sensors and use them through the generic interface, while the derived classes managed the gritty details of every model.
GCS d--(d+) s-/++ a C++++ U+++ P- L+@ E-- W++ N+ o+ K- w+++ O? M-- V? PS+ PE- Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++* Weapons extension: ma- k++ F+2 X
Very similar to my situation, but in my case I have to produce a visual representation of said data (which varies wildly), and it must be pushed to different display formats. I found the time needed to produce a class that could be inherited not worth the effort. If the code never changes (note my comment - 10 years), why bother with the investment?
I like the idea of just being able to inherit from a base class, but it happens too rarely...
Charlie Gilley
Stuck in a dysfunctional matrix from which I must escape...
"Where liberty dwells, there is my country." B. Franklin, 1783
“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
Firmware development follows different rules than software; there's no way around it. Software should not depend on the underlying implementation details; firmware is the underlying implementation, and it's all about details.
Some things should be kept as agnostic as possible - e.g. the main state machine should not depend on the exact make of the various hardware components, so the interfaces to control hardware should be generic (i.e. peripheral_On, peripheral_Off, peripheral_Sleep, peripheral_Send...) - but all the rest cannot. One component may be turned on/off via the combination of two pins, while another, identical in every aspect, may require timed pulses on a single pin and follow a protocol based on several outputs.
GCS d--(d+) s-/++ a C++++ U+++ P- L+@ E-- W++ N+ o+ K- w+++ O? M-- V? PS+ PE- Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++* Weapons extension: ma- k++ F+2 X
Yeah. Reality bites.
At some point we try to abstract away all the hardware bit-stuffing, but it's still there ruining all the nice plans - all it takes to ruin everything is a status bit that auto-resets when the other bits in the byte are read.
Heap allocation at run time is something brutally beaten into me. Actually, beaten out of me.
Charlie Gilley
Stuck in a dysfunctional matrix from which I must escape...
"Where liberty dwells, there is my country." B. Franklin, 1783
“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
My theory is that there are very few examples around that bring OOP all together.
OOP's main purpose should be reuse and making the code that you write smaller (less code to deal with when adding enhancements or fixing bugs).
I believe that with one very focused example you would see all of PIE-A (Polymorphism, Inheritance, Encapsulation and Abstraction) come together.
But most of the time you don't need this type of architecture until things get large. And most projects don't get large - especially the samples you see.
Here's My Attempt
This should be an article but I'll do that later.
The Entire Premise
Imagine you want to save data to three different data stores:
1) file
2) database
3) web location
Save instantly becomes our main verb (functionality).
Requirements: We Want Four Things
1. Any dev must be able to include the Save() functionality on their class in the future. (interface)
2. Any dev must be able to call the Save() functionality on any class in the future and easily know that it is named Save() - this is self-documenting code.
3. There must be an easy way for a dev to configure where the data will be stored (file, db, url).
4. A dev must be able to create a list of various types (classes in the architecture) and iterate through them, calling Save() and knowing that they will save to their appropriate destination. This is Polymorphism -- all objects implement the Interface which provides Save().
Here is the smallest sample I can come up with, and it really works.
Get LINQPad - The .NET Programmer's Playground and run the code below.
You will see the following output:
I'm saving into a FILE : super.txt
I'm saving into a FILE : extra.txt
I'm saving into a DATABASE : connection=superdb;integrated security=true
I'm saving into a WEB LOCATION : http://test.com/saveData?
I'm saving into a FILE : super.txt
I'm saving into a FILE : extra.txt
I'm saving into a DATABASE : connection=superdb;integrated security=true
I'm saving into a WEB LOCATION : http://test.com/saveData?
Now a dev can
1. create an IPersistable object.
2. pass in an IConfigurable object (which determines which data store the Save() will write to)
3. call Save() on the object
Dev Only Needs To Know Two Things: Abstraction
1. create a configurable object -- select which data store
2. call Save()
void Main()
{
    List<IPersistable<IConfigurable>> allItems = new List<IPersistable<IConfigurable>>();

    FileConfig fc = new FileConfig("super.txt");
    IPersistable<IConfigurable> item = new FileSaver(fc);

    List<IPersistable<IConfigurable>> fakeData = new List<IPersistable<IConfigurable>>();
    fakeData.Add(new FileSaver(new FileConfig("extra.txt")));
    fakeData.Add(new DatabaseSaver(new DatabaseConfig("connection=superdb;integrated security=true")));
    fakeData.Add(new TcpSaver(new TcpConfig("http://test.com/saveData?")));

    allItems.Add(item);
    foreach (IPersistable<IConfigurable> ic in fakeData)
    {
        allItems.Add(ic);
    }

    foreach (IPersistable<IConfigurable> ip in allItems)
    {
        ip.Save();
    }

    foreach (var ip in allItems)
    {
        ip.Save();
    }
}

interface IPersistable<T> where T : IConfigurable
{
    bool Save();
}

interface IConfigurable
{
}

class FileSaver : IPersistable<IConfigurable>
{
    protected FileConfig config;

    public FileSaver(FileConfig config)
    {
        this.config = config;
    }

    public virtual bool Save()
    {
        Console.WriteLine(String.Format("I'm saving into a FILE : {0}", config.FileName));
        return true;
    }
}

class FileConfig : IConfigurable
{
    public string FileName { get; set; }

    public FileConfig(String fileName = null)
    {
        FileName = fileName;
    }
}

class DatabaseConfig : IConfigurable
{
    public string ConnectionString { get; set; }

    public DatabaseConfig(String connectionString = null)
    {
        this.ConnectionString = connectionString;
    }
}

class TcpConfig : IConfigurable
{
    public string Uri { get; set; }

    public TcpConfig(String uri)
    {
        Uri = uri;
    }
}

class DatabaseSaver : IPersistable<IConfigurable>
{
    protected DatabaseConfig config;

    public DatabaseSaver(DatabaseConfig config)
    {
        this.config = config;
    }

    public bool Save()
    {
        Console.WriteLine(String.Format("I'm saving into a DATABASE : {0}", config.ConnectionString));
        return true;
    }
}

class TcpSaver : IPersistable<IConfigurable>
{
    TcpConfig config;

    public TcpSaver(TcpConfig config)
    {
        this.config = config;
    }

    public bool Save()
    {
        Console.WriteLine(String.Format("I'm saving into a WEB LOCATION : {0}", config.Uri));
        return true;
    }
}
I also worked in what you could call the embedded world, though not so close to the hardware, and soft rather than hard real-time. An OO rewrite saved the product I was working on, and it's still seeing development over 20 years later. We used all three (encapsulation, inheritance, and polymorphism) extensively.
The farther you get from the hardware, the more a generalized approach is useful - even a necessity. I worked on a product with similar specifications ("not so close to the hardware, and soft rather than hard real-time") and OOP was a huge benefit; when we started adopting it there was a significant improvement in quality, development time, customization time, and stability.
GCS d--(d+) s-/++ a C++++ U+++ P- L+@ E-- W++ N+ o+ K- w+++ O? M-- V? PS+ PE- Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++* Weapons extension: ma- k++ F+2 X
You are quite correct, IMO. First, forget about the promise of re-use when it comes to objects. Nobody in the real world ever does anything twice exactly the same way, so there is rarely any re-use benefit to collect.
In order of importance, to me:
1. Encapsulation - keeps stuff organized
2. Interfaces - defines what the class is expected to implement. I also use empty interfaces simply to indicate that the class supports some other behavior. Could use attributes for that as well, but interfaces are sometimes more convenient when dealing with a collection of classes that all support the same thing and there are methods that operate on that, hence I can pass in "IAuditable", for example.
3. Inheritance/Abstraction - mostly useless, but there are times when I want to pull out common properties among a set of logical classes. Note that I don't consider this to be true abstraction; it's using inheritance to define common properties and behaviors.
4. Polymorphism - useful, but less so now with optional default parameters, which do the work that polymorphic methods were often used for.
IMO, the reality of "how useful is OO" falls quite short of the promise of OO.
I'd agree. Sounds like function interfaces with better, purposeful naming of the why, rather than a focus on the what and how.
I tend to take the view that isolation is a good target. Code isolated from other code is both maintainable and reusable, regardless of whether it is via an OO design or not.
Reuse is limited when using any OO language, because then any program that wants to reuse that code has to use the same language. If you're going after reuse, you have to give up OO because they are mutually exclusive.
All the widely re-used code is written in non-OO languages. Sqlite, for example, is one of the most widely deployed pieces of code in the world. Media format readers, likewise, are widely deployed and non-OO.
If you write something really new and novel that does not exist (a new image format, a new protocol, new encryption algo, new compression format, interface to any of the above, or interface to existing daemons (RDBMSs, etc)), and you write it in Java or C#, the only way that it can become popular is if someone re-implements it in C so that Python, C, C++, Java, C#, Delphi, Lazarus, Perl, Rust, Go, Lisps, Php, Ruby, Tcl (and more) programs can use it.
The upside of producing library files (.so or .dll) that can be used by any language is that the result is also quite isolated and loosely coupled from anything else:
- It can be easily extended by anyone, but not easily enhanced.
- It can be easily swapped out and replaced with a different implementation without needing the programs using that library to be recompiled, redeployed or changed in any way.
- Because it is a library, it will only be for a single type of task (no one would even think of putting unrelated functionality into a compression library, but I've seen devs happily put in unrelated stuff into a compression class).
Ironically, you can more easily achieve SOLID principles writing plain C libraries (.so or .dll) than you can with actual OO languages, because of the limitations of the call interface in dynamic libraries.
fwiw, the code I am modifying has not changed in 10 years. So, why bother making it general?
Well, if it has not changed in 10 years I would say it is general enough
I am in sort of the same place as you: mostly embedded development, and whenever I tried using OO I mostly failed - usually because I decided to make a class for something that will only ever have one object instance.