I've been running Ubuntu exclusively on my home computer for at least 6 months now.
I have dual boot but I never boot into Win10 any more.
During the Stay-At-Home order I've been WFH (work from home) for over 5 weeks now.
I use Remmina to remote into my work computers (Win10) from my Linux machine -- Remmina actually works better than mstsc (RDP).
Dot Net Core
Anyways, I have had .NET Core on the machine for a while and Visual Studio Code, but yesterday I started testing out functionality by writing little command-line programs.
Visual Studio Code allows you to step through code (with the proper plugin).
You can do amazing things on Linux based entirely on your C# skills. Really cool!
Feels Like K&R C
Also, if you ever had the chance to do some old C programming and went through the K&R C book, I believe you will find this Dot Net Core programming world really cool.
Yes, it's console-based and you have to learn dotnet commands for building and running but it is just so much fun and you can really learn about the OS by writing these little programs.
Let me give you some examples:
The generic collections (System.Collections.Generic) are all there.
You can do LINQ (System.Linq).
You can do crypto and hashing (like SHA-256) using System.Security.Cryptography.
You can do all the IO (files, folders, etc.) stuff with System.IO. System.Diagnostics is there to get Process info.
And the System.Environment stuff is all there (i.e. how long the OS has been running, what the machine name is, and so much more).
These are just the ones I've tried; there is much more.
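As a flavor of what those little command-line programs can look like, here's a minimal sketch of my own (the class and method names are made up), mixing a few of the namespaces above:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

class Probe
{
    // SHA-256 of a string, hex-encoded (System.Security.Cryptography)
    public static string Sha256Hex(string input)
    {
        using (var sha = SHA256.Create())
        {
            var bytes = sha.ComputeHash(Encoding.UTF8.GetBytes(input));
            return string.Concat(bytes.Select(b => b.ToString("x2")));
        }
    }

    static void Main()
    {
        // System.IO + System.Collections.Generic: file sizes in the current directory
        var sizes = new List<long>();
        foreach (var file in Directory.EnumerateFiles("."))
            sizes.Add(new FileInfo(file).Length);

        // System.Linq to aggregate
        Console.WriteLine($"{sizes.Count} files, {sizes.Sum()} bytes total");

        // System.Environment: machine name and (rough) time since OS start.
        // Note TickCount is milliseconds since boot and wraps after ~24.9 days.
        Console.WriteLine($"Machine: {Environment.MachineName}");
        Console.WriteLine($"Up for : {TimeSpan.FromMilliseconds(Environment.TickCount)}");

        Console.WriteLine($"SHA-256(\"abc\") = {Sha256Hex("abc")}");
    }
}
```

Build and run it with `dotnet run` from the project folder and the same code works on either OS.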
Of course, System.Windows.Forms does not work at this time, but think of this as K&R C and you'll be happy. I'm happy.
Linux Quite Different From Windows, But Same Under C#
Think about that. The two environments (windows & linux) are quite different but all of these things work via C#. It is amazing. I'm so infatuated with this right now.
Visual Studio Code and Dot Net Core are so cool.
EDIT - More libraries tested
Just came back to say that I've now tried: System.Net - HTTP web communication, retrieving web pages, etc. works great. System.Threading - thread stuff works over here on Linux too. Maybe we all expect this, but it is quite amazing that I'm writing C# on Linux and running it. Maybe more amazing to those of us who dreamed of Mono being a real thing for a long time.
Also System.Timers - implemented a system timer and it works great. I'm still infatuated, people.
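For instance, a little sketch of the System.Timers usage (the CountTicks helper is my own invention, just to make the tick count observable):

```csharp
using System;
using System.Threading;

class TimerDemo
{
    // Count how many times a System.Timers.Timer fires over runMs milliseconds.
    // Fully qualified because System.Threading also defines a Timer type.
    public static int CountTicks(double intervalMs, int runMs)
    {
        var ticks = 0;
        using (var timer = new System.Timers.Timer(intervalMs))
        {
            // Elapsed fires on a thread-pool thread, so increment atomically
            timer.Elapsed += (sender, e) => Interlocked.Increment(ref ticks);
            timer.AutoReset = true;   // keep firing until stopped
            timer.Start();
            Thread.Sleep(runMs);
            timer.Stop();
        }
        return ticks;
    }

    static void Main()
    {
        // Roughly ten ticks expected for a 100 ms interval over one second
        Console.WriteLine($"Fired {CountTicks(100, 1000)} times in one second");
    }
}
```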
It's so cool to pull code from my LINQPad scripts, paste them over to Linux and run them. Yippee-ki-yeah!
Glad to hear I'm not the only one playing with .NET Core on Linux - I've written an ASP.NET Core API service (in C#) that I host as a daemon on an SBC (FriendlyElec NanoPi M4 V2 running Armbian Bionic Beaver), which queries a PostgreSQL database and does some pretty serious number crunching, and it works like a dream. I have a love affair with SBCs, and with the advent of .NET Core I can use them to prototype real-world scenarios. I've built my boards with terabyte M.2 SSDs and 4GB RAM; they have an RK3399 ARM64 hexa-core processor and the little beauties fly - did I mention I love .NET Core? Next step is to containerize this with Docker. Nginx is also in the mix as a reverse proxy (all this on an SBC). Big thanks to folks on here who helped me with the Web side of things (you know who you are).
"We can't stop here - this is bat country" - Hunter S Thompson - RIP
That sounds like some very cool stuff. Thanks for sharing.
I'm glad I'm not the only one who is excited about dotnet core and Linux, too.
I need to learn a bit more about nginx and the container stuff is definitely in the future.
You can save a bit of money* by running a small Linux container (like I do on DigitalOcean), and out of that container you can run a C#-based WebAPI or a C#-based (MVC) web site or whatever. Very cool stuff.
*since Linux hosting is generally less expensive.
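If you go the container route, a minimal two-stage Dockerfile is the usual starting point. This is a sketch for the .NET Core 3.1 images (the project output name MyWebApi.dll is a placeholder for your own):

```dockerfile
# Build stage: SDK image compiles and publishes the app
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app

# Runtime stage: smaller ASP.NET Core runtime image runs it
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "MyWebApi.dll"]
```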
I've never published to a hosting site as most of my work has been backend ( we host our own site and services ) and I never got involved with the rest - I'd like to push what I've built to a public hosting company just for the hell of it - can you give me any hints as to how one goes about doing this ?
"We can't stop here - this is bat country" - Hunter S Thompson - RIP
Now, there are some more advanced things you'll need to learn, like exposing an ASP.NET Core MVC app to the outside world (via nginx), and that will be more difficult. But if you have console-based apps, they will run with the first 4 steps above.
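The nginx piece is usually just a reverse-proxy server block in front of Kestrel. A sketch, roughly what the Microsoft hosting docs recommend (the domain is a placeholder; Kestrel listens on port 5000 by default):

```nginx
server {
    listen 80;
    server_name example.com;   # placeholder domain

    location / {
        proxy_pass         http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header   Upgrade $http_upgrade;
        proxy_set_header   Connection keep-alive;
        proxy_set_header   Host $host;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto $scheme;
    }
}
```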
Let me know your results. I'm very interested.
I've been using Visual Studio Code and Dot Net Core (now 3.1) on a Mac for about 3 years, since my only Windows machine back then wasn't handling life well at all, and I really like the ecosystem (I started off data mining files of ISP outage data using LINQ, regexes, etc. so I could tell my ISP, in no uncertain terms, there were problems) ..
I've since expanded my concept of a command-line program within the same ecosystem, adding Dependency Injection/IoC with Microsoft.Extensions.DependencyInjection, which means logging is 'easier', and I've been going to the 'async' side where possible as well.
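As an illustration of that DI setup in a plain console app (the IGreeter service here is my own made-up example; it assumes the Microsoft.Extensions.DependencyInjection NuGet package is referenced):

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

interface IGreeter { string Greet(string name); }

class ConsoleGreeter : IGreeter
{
    public string Greet(string name) => $"Hello, {name}!";
}

class Program
{
    static void Main()
    {
        // Register services in a container, console-app style
        var services = new ServiceCollection()
            .AddSingleton<IGreeter, ConsoleGreeter>()
            .BuildServiceProvider();

        // Resolve through the interface, not the concrete type
        var greeter = services.GetRequiredService<IGreeter>();
        Console.WriteLine(greeter.Greet("world"));
    }
}
```

The same ServiceCollection pattern is what lets you drop in Microsoft.Extensions.Logging later without touching the consuming code.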
Of course, System.Windows.Forms does not work at this time
Until we can run GUI apps (WPF and Winforms) in Linux, I won't consider .Net Core to be "cross-platform".
".45 ACP - because shooting twice is just silly" - JSOP, 2010 ----- You can never have too much ammo - unless you're swimming, or on fire. - JSOP, 2010 ----- When you pry the gun from my cold dead hands, be careful - the barrel will be very hot. - JSOP, 2013
The root of the issue is that Microsoft has a tendency to delete web pages documenting their sins. I live in the embedded world where things never die. So, in my support documentation, I cannot reference web links - I embed captured PDF files. What I have discovered is that printing a web page inevitably leads to text truncation, something I cannot tolerate.
For example, if I use FireFox or Chrome to "print" to PDF, chunks of text are missing. It's hit or miss. Using IE (I mean, MS made it, right?), I still get the same result. I saved the entire page - IE won't load it. Seems very random.
Stuck in a dysfunctional matrix from which I must escape...
"Where liberty dwells, there is my country." B. Franklin, 1783
“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
if I use FireFox or Chrome to "print" to pdf - chunks of text are missing. It's hit or miss. Using IE (I mean MS made it right?), I still get the same result. I saved the entire page - IE won't load it. Seems very random.
FireFox or Chrome print to PDF -- I have had good luck with it.
However, I have a guess as to what is happening.
Print To PDF Issue
If the page you are attempting to print to PDF is:
1. generated by some process on the web site - where the site adds divs, table rows, etc. - then those items may only be in the in-memory DOM, and when you print to PDF they are missing. This is actually a problem with the web site itself.
On IE Issue
On the other one, where you say you "saved the entire page": again, I think you are doing a Save Page As..., which will download the page from the web site, but if there are generated sections of the HTML (DOM), then the web site itself may not generate those upon Save As...
I've found printing web pages is generally a good experience these days - much better than in the past. Print to PDF works well most of the time. You can also try Foxit Reader print driver; free, full-functioned and also works well.
MS Edge fares better than Firefox and IE combined, on Windows 10. But the whole idea of "print" when it comes to webpages is hit or miss, especially when targeting the spooler. I won't ask the obvious question, being the judge and all, but does anybody know where one can ask a question about peripheral printing and squeezing sheets of paper through a system of rollers past hot stuff ... here on CP?
I never "print" to the spooler. Always print to one of a few .pdf engendering applications dedicated to making life simple ... ER.
I was going to be a smartass and point out that PrintScreen has never introduced any formatting error for me, but obviously that's only good for a page that fits entirely on a single screen. This looks like a smarter solution if it can do the whole page even when it requires scrolling.
OTOH, you still end up with an image and lose all context.
Chrome's built-in dev tools can do some decent things that do keep the DOM elements - they really should leverage that to help customize printing. Of course it'd have to be called expert mode printing or some-such...
but obviously that's only good for a page that fits entirely on a single screen
As if you couldn't open a Word file, set the margins to the minimum, paste the picture and go for the next screenshot... repeat until the web page is ready to be printed as a whole.
If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
I used to get this often in Firefox, until I found out a well-kept secret.
Firefox has 2 built-in print systems.
If you press Ctrl+P like everyone does, you get the standard, rubbish text-truncation print system.
If, however, you click on the hamburger menu at the top right and find the print option, you get the second print provider, which gives you access to all sorts of formatting options and does a better job too, especially when printing to a virtual printer.
I have been fighting to get decent printouts for years - and have given up. Today, I rather save the complete web page as an HTML file, with an associated subdirectory for images, style sheets and whatever. Most browsers provide this as a menu option or control-key command, and it takes care of everything.
A small disadvantage, if you save hundreds of these pages, is that you end up with hundreds of copies of the same icons, images, common script snippets etc., one set per saved page. But disk is cheap nowadays; it is not really a big issue.
My experience is that this works a lot better than making PDF files for printing.
10-15 years ago, there was a whole crowd of "web harvesters" that allowed you to download an entire web site. They would keep the URL structure as a directory structure, so that e.g. icons and images were stored only once - a much cleaner arrangement if you want offline access to an entire website, or a major part of one. Fifteen years ago, there were still a few web pages here and there with more or less static, plain text/graphics info, so it used to work quite well. Nowadays, when 99% of web pages are built on-the-spot for each request, and much of the information presented is retrieved from a remote database as you move around in the page, the harvesters (crawlers, scrapers, ... lots of names are in use) are not as useful as they used to be. Googling for e.g. "web harvesting" gives you enough links to keep you busy until the pandemic is over.