|
Peter_in_2780 wrote: hop across to Nautilus
What's Nautilus? Google says mollusc, but I fail to see how this is related to your passwords.
|
|
|
|
|
The Ubuntu (Linux) equivalent of Windoze Explorer. aka Files.
Software rusts. Simon Stephenson, ca 1994. So does this signature. me, 2012
|
|
|
|
|
While Explorer has an equivalent (right-click on a file, Properties, Previous Versions) - I've never seen it report anything but "there are no previous versions available".
I've always wondered if this was related to Storage Spaces (which I've never used, so that would explain why I've never seen it offer a list of previous versions), but AFAIK Storage Spaces is a Win10 feature only, and that Previous Versions tab is there in 7 also (I just checked - it might also be there for even older versions for all I know).
If it's a System Restore thing, then I guess it's as useless as System Restore has ever been to me - meaning, I've never seen a single instance where it ever did anything useful.
|
|
|
|
|
Hey all. I put this in the Lounge section because I don’t think it’s a question that can just be solved. I want to see what you think about microservices versus monolith architecture for your own personal projects.
Obviously, there are the generic answers: microservices are great for big expanding websites with a lot of manpower. But if your entire code base is in one place, it’s easier to deploy, test, and (arguably) manage for a single person.
The reason I’m asking is that I’m a pretty new programmer. I have a hard time organizing my code when it gets larger. So if I were to use a microservice architecture, I would have built-in organization of services.
Hypothetically, if this application actually becomes something I can make money off of, it would be a lot easier to expand if I can get one guy to work on the users service and so on.
Plus - this would allow putting different back-end languages together in a single project.
On the other hand, since it’s just me, it seems daunting to have 7+ different small apps running at once for a personal, one-person project.
It might even affect the cost to run it.
The additional complexity for simple tasks turns it into kind of a nightmare. Right now I’m looking at security - some services have both public and private endpoints, so I will have to find a way to either send the authorization across services or to tell the gateway service about all my endpoints, thus defeating the purpose.
So. What do you all think?
|
|
|
|
|
Go with what is easier to develop and maintain.
Ease of maintenance is always a winner in the long term!
In fact it also wins in the short term: easy-to-maintain code sees its bugs fixed faster!
|
|
|
|
|
Interesting question!
At a bare minimum, I would put your services into separate DLLs and write interfaces for your "exposed" classes, and use those interfaces everywhere else. Mark your classes internal so you don't accidentally use the concrete class instead of the interface. Use factory methods for singletons, and some other public static method for creating instances if needed. If you want to go more microservice later on, this is a big step in the right direction.
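That advice is .NET-specific, but the shape translates to other languages. Here is a minimal sketch of the interface-plus-factory idea in Python, using an ABC in place of a C# interface and an underscore prefix in place of `internal` (the names `IUserService`, `_UserService`, and `create_user_service` are made up for illustration):

```python
from abc import ABC, abstractmethod

class IUserService(ABC):
    """The public interface: everything outside the service module
    should depend on this, never on the concrete class."""
    @abstractmethod
    def get_name(self, user_id: int) -> str: ...

class _UserService(IUserService):
    # Underscore prefix plays the role of C#'s `internal` access modifier.
    def get_name(self, user_id: int) -> str:
        return f"user-{user_id}"

_instance = None

def create_user_service() -> IUserService:
    """Factory method returning a singleton, typed as the interface."""
    global _instance
    if _instance is None:
        _instance = _UserService()
    return _instance
```

Callers write `svc = create_user_service()` and only ever see `IUserService`, which is what makes carving the implementation out into a separate service later a mechanical change.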
A slightly more sophisticated step is to do the same thing but use a simple dynamic loader where you describe the services, as modules, that your main app wants. The only assembly that needs to be shared is the one that defines the interfaces. I usually take this approach, as I can customize the app for whatever services (DLLs) I need and easily replace a service (DLL) with something else, say a stub, if it's not implemented. The core process of the implementation I use is described here[^]. See parts II and III for additional bells and whistles.
BTW, the disadvantage of the above is that you have to implement a post-build copy step to copy the DLLs to your app's bin/debug (and release) folder, unless you implement a more sophisticated assembly resolver. I cheat by simply referencing the service DLLs in my application, which pulls them in.
Alternatively, you could look at one of the dependency injection frameworks. I personally dislike DI (though .NET Core does a nice job of it) mainly because some of the DI frameworks I've worked with a long time ago add a ton of krufty garbage that makes debugging a nightmare.
If you really want to go nutso, implement each service on a $35 rPi and have them talk to each other over HTTP.
If you want to be really far out (as if the rPi idea isn't) I propose that microservices is going to be a dead idea at some point. Microservices are based on a "call this service to have it do something" concept. Consider instead an agent-based implementation. Agents are lightweight just like microservices, but instead they sit around waiting for something to work on. This means implementing some kind of a "data bus" where you publish your data and any agent interested in that data does whatever it does and publishes the result back onto the bus. Highly asynchronous, highly extensible, highly distributable, and very autonomous. That, in my warped opinion, is what will eventually replace microservices. Because you see, while microservices solve the monolithic architecture issue, they don't solve the monolithic workflow issue. Agents do.
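To make the agent idea concrete, here's a toy in-process data bus sketched in Python (an assumption on my part - a real one would be distributed and asynchronous; the topic names and agents are hypothetical):

```python
from collections import defaultdict

class DataBus:
    """Minimal in-process data bus: agents subscribe to topics and
    publish results back onto the bus, so nobody calls anybody directly."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, agent):
        self._subscribers[topic].append(agent)

    def publish(self, topic, data):
        for agent in self._subscribers[topic]:
            agent(self, data)   # each agent may publish follow-up topics

# Two hypothetical agents: one normalizes an order, one totals it.
def normalize_order(bus, order):
    order = {k.lower(): v for k, v in order.items()}
    bus.publish("order.normalized", order)

results = []
def total_order(bus, order):
    results.append(sum(order["items"]))

bus = DataBus()
bus.subscribe("order.received", normalize_order)
bus.subscribe("order.normalized", total_order)
bus.publish("order.received", {"Items": [2, 3, 5]})
# results now holds [10]
```

Note how the workflow emerges from what each agent reacts to, rather than being hard-coded as a chain of service calls - that's the "monolithic workflow" point.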
So that's my 2c wisdom.
[edit]
Member 14138886 wrote: so I will have to find a way to either send the authorization across services or to tell the gateway service about all my endpoints, thus defeating the purpose.
This may or may not make any sense, but your gateway should authenticate whatever needs to be authenticated. Everything else should be hidden from the public Internet -- any communication between the gateway and the services, or between the services themselves, should be private and assumed to be vetted by the gateway.
You don't have to tell your gateway about all your endpoints, or even your routes. It sounds like you just need a generic gateway that does authorization/authentication and, if auth'd, passes the request on to your internal services. You might need some sort of routing table, but I'd flip that around -- have each service, say running in its own IP/port, Docker container, separate machine, whatever, tell the gateway "hey, these are the routes I'm interested in, these are public, these require auth'ing" -- the gateway then builds the routing table programmatically. Particularly useful if some piece of hardware dies and you need to replace the service it handles, for example.
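A rough sketch of that registration idea in Python (hypothetical names throughout, and a stand-in token check in place of real auth):

```python
class Gateway:
    """Hypothetical gateway: services register their own routes, and the
    gateway only decides auth + forwarding; it knows nothing up front."""
    def __init__(self):
        self._routes = {}   # path -> (handler, requires_auth)

    def register(self, path, handler, requires_auth):
        # Called by each service at startup: "these are my routes."
        self._routes[path] = (handler, requires_auth)

    def handle(self, path, token=None):
        if path not in self._routes:
            return 404, None
        handler, requires_auth = self._routes[path]
        if requires_auth and token != "valid-token":  # stand-in auth check
            return 401, None
        return 200, handler()

gw = Gateway()
# Each service announces its own routes; the gateway never hard-codes them.
gw.register("/users", lambda: ["alice", "bob"], requires_auth=True)
gw.register("/health", lambda: "ok", requires_auth=False)
```

The services behind the gateway then never re-check credentials themselves; they trust that anything reaching them has already been vetted.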
Though this is beginning to sound a lot like a reverse proxy.
[/edit]
Latest Article - Slack-Chatting with you rPi
Learning to code with python is like learning to swim with those little arm floaties. It gives you undeserved confidence and will eventually drown you. - DangerBunny
Artificial intelligence is the only remedy for natural stupidity. - CDP1802
modified 3-Feb-19 18:42pm.
|
|
|
|
|
Marc Clifton wrote: If you want to be really far out (as if the rPi idea isn't) I propose that microservices is going to be a dead idea at some point. Microservices are based on a "call this service to have it do something" concept. Consider instead an agent-based implementation. Agents are lightweight just like microservices, but instead they sit around waiting for something to work on. This means implementing some kind of a "data bus" where you publish your data and any agent interested in that data does whatever it does and publishes the result back onto the bus. Highly asynchronous, highly extensible, highly distributable, and very autonomous. That, in my warped opinion, is what will eventually replace microservices. Because you see, while microservices solve the monolithic architecture issue, they don't solve the monolithic workflow issue. Agents do.
I would look at doing something like this [if I understood it correctly], using some sort of messaging queue [see RabbitMQ and similar] where work that needs doing can be dropped on the queues and processed by one or more instances of an agent. It also means that individual services can be taken offline, updated and redeployed, or replaced without losing any work requests.
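The competing-consumers shape can be sketched with Python's standard library alone (a stand-in for a real broker like RabbitMQ - the "work" here is just doubling a number, purely for illustration):

```python
import queue
import threading

# Hypothetical work queue: interchangeable agent instances pull jobs,
# so any one of them can be taken offline without losing queued work.
jobs = queue.Queue()
results = queue.Queue()

def agent(worker_id):
    while True:
        job = jobs.get()
        if job is None:          # shutdown sentinel
            jobs.task_done()
            break
        results.put((worker_id, job * 2))  # stand-in for real work
        jobs.task_done()

workers = [threading.Thread(target=agent, args=(i,)) for i in range(2)]
for w in workers:
    w.start()
for n in range(5):               # producer drops work on the queue
    jobs.put(n)
for _ in workers:                # one sentinel per worker to shut down
    jobs.put(None)
jobs.join()                      # block until every job is acknowledged

processed = sorted(r for _, r in [results.get() for _ in range(5)])
```

With a real broker, acknowledgements replace `task_done()`: a job is only removed from the queue once a consumer confirms it, which is what gives you the "replace a service without losing requests" property.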
|
|
|
|
|
|
Without intending to denigrate anything said here, there are two guiding principles that have stood the test of time for me in over 40 years of software development: KISS and OCCAM's Razor. Simply expressed they are:
Keep It Simple, Stupid! and the simplest solution to a problem is likely to be the most effective (let alone less work and require less maintenance).
It is so easy - BTDTGTTS! - to get absorbed in the latest frameworks, whizz-bang IDEs, cloud services, paradigms, Agile processes, TDD, new languages, containers, functional programming etc. etc. etc., that you can lose sight of the fact that creating a whole software ecosystem with thousands of lines of code using hordes of complex tools simply to write "Hello World!" onto the screen is perhaps not the right approach!
8)
|
|
|
|
|
Mike Winiberg wrote: there are two guiding principles that have stood the test of time for me in over 40 years of software development: KISS and OCCAM's Razor.
I totally agree, which is why I went rogue years ago and still cannot stomach looking at code written with one of those heavyweight frameworks. And nowadays, you have to deal with multiple frameworks: ASP.NET (and flavors) on the back end, Angular (and flavors) on the front end, with yet another layer of obfuscation as well (currently dealing with ExtJs). And any value that these add ends up being completely lost in all the kruft that is required, IMO. Belch.
Latest Article - Slack-Chatting with you rPi
Learning to code with python is like learning to swim with those little arm floaties. It gives you undeserved confidence and will eventually drown you. - DangerBunny
Artificial intelligence is the only remedy for natural stupidity. - CDP1802
|
|
|
|
|
I don't understand the question. And I don't do any of that Web crap. I mainly do back-end and database work.
For a previous employer I developed a suite of Windows Services that did a bunch of different things on the server -- each in its own DLL.
Still, the general rule is to use the right tool for the particular job. So trying to learn how to use only one tool is a fool's errand.
|
|
|
|
|
If you are new to programming, I suggest going with a monolith, then organizing your code using something like Onion architecture/Clean Architecture. After that, you can take a module out and make it into a new service.
You can then play around with Istio; it's a service mesh. You can deploy it in Kubernetes or on individual virtual machines.
Caveat: I just started playing around with Kubernetes. But I think Istio will surely help in managing your microservices with load balancing, service-to-service authentication, monitoring, etc. I will look into Istio soon.
[Signature space for sale]
|
|
|
|
|
Bear in mind that there are endless gradations between monolithic and micro...
Do your design work. Take into account not only the complexity, but also the workforce (knowledge)...
And totally ignore fashion...
"The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge". Stephen Hawking, 1942- 2018
|
|
|
|
|
|
Having done this myself, and feeling the pain right now, I'd still say go monolith first.
Ultimately, any project is constrained by something.
Your constraints are:
1. Time/manpower
2. Money
and right now, you're not worried about 1000's of users.
If you come up with a good proof of concept app, use it yourself and love it, then you know you're on to something.
The next step will be making it look better. <-- many man-hours here
Then try to sell it. <-- many man-hours here
Then, only when you get to, say, 100 users, you MIGHT have a scaling problem (I'm guessing, as I don't know your system/solution, but you understand what I mean).
Now, you've sold it 100 times/subscriptions.
You can make real decisions.
Worst case:
- you double up all your hardware to sell to another 100, or
- pay other people (buy time) to re-write the app into a more scalable solution.
I'll be happy to discuss your idea/plan in private with you. I'm right in the middle of this myself. So would be good to bounce ideas. Direct message.
Good luck.
|
|
|
|
|
If you can't handle monoliths you probably can't handle microservices.
Both require you to write loosely coupled code, except microservices are harder to maintain, debug and refactor.
I'm currently working on an application that's a monolith, except for a little piece of code that serves as a webhook, which I've put in an Azure Function.
It's like a monolith with a microservice.
If I find more functionality like that I'll put it in another Function, but for most of the work a monolith works fine (no scaling requirements, I'm the only developer, etc.).
By the way, if you're new you should probably focus on learning other things first, like proper database handling, DI, SOLID, SoC...
On the other hand, if you have experience with those and you know how monoliths do and don't work, it will be cool to use and learn about microservices.
|
|
|
|
|
Member 14138886 wrote: The reason I’m asking this is because I’m a pretty new programmer. I have a hard time organizing my code when it gets larger. So if I was to use a micro service architecture, I would have built-in organization of services.
And here is the rub - making microservices won't organize your code. It will push you to break it up, but you are quite likely not going to divide it correctly the first time. I would suggest keeping as much as possible in a single project; then it will be easier to move things around as your understanding improves.
|
|
|
|
|
Why would you consider enterprise-level practices for a single-user application, or one which is being designed by an individual?
You do not need microservices to organize an application's coding structure. You simply need a clean structure that suits your own style of development. If you need specific services for your endeavor, then there are plenty of third-party libraries available for this.
Also, this is what Object Oriented Programming does quite well, though many of its other touted benefits have become questionable over the years (i.e.: re-usability).
The other issue with much of current programming practice is that everything has to go on the Web. Why? For small usage situations, or even larger departmental ones, client-server designs are still the most efficient designs available and are much less complex to design and implement.
The implementation of a Web application should only be considered when large numbers of users are expected across multiple domains within a company, or if the implementation is to be publicly available...
Steve Naidamast
Sr. Software Engineer
Black Falcon Software, Inc.
blackfalconsoftware@outlook.com
|
|
|
|
|
Have done both.
My experience made my answer very clear - don't distribute.
I would suggest you read this - no one can say it better than him, the founder of Ruby on Rails and Basecamp:
The Majestic Monolith
|
|
|
|
|
Personal project?
New programmer?
For sure do what you haven't done before (and fail). You gotta learn what's good and bad for you first hand.
|
|
|
|
|
https://arstechnica.com/information-technology/2019/02/digital-exchange-loses-137-million-as-founder-takes-passwords-to-the-grave/
no backups...
Caveat Emptor.
"Progress doesn't come from early risers – progress is made by lazy men looking for easier ways to do things." Lazarus Long
|
|
|
|
|
Can't they just cut off his finger and use his fingerprint to log in to his laptop?
> The debacle should be unthinkable for any financial institution, but sadly it’s just one of many similar issues to hit a cryptocurrency exchange in recent years.
Latest Article - Slack-Chatting with you rPi
Learning to code with python is like learning to swim with those little arm floaties. It gives you undeserved confidence and will eventually drown you. - DangerBunny
Artificial intelligence is the only remedy for natural stupidity. - CDP1802
|
|
|
|
|
Security procedures supplied by QA questioners, I assume ...
Sent from my Amstrad PC 1640
Never throw anything away, Griff
Bad command or file name. Bad, bad command! Sit! Stay! Staaaay...
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
|
Okay, I'm old enough to get that one.
|
|
|
|
|