The Lounge is rated Safe For Work. If you're about to post something inappropriate for a shared office environment, then don't post it. No ads, no abuse, and no programming questions. Trolling (political, climate, religious, or whatever) will result in your account being removed.
I spent an hour fighting with XmlSerialisers to try and get my object mapped to a schema. Changing names, trying to get attributes set up, dealing with CDATA. I gave up. I got so fed up I simply wrote the XML directly as a raw string. If I could have kicked it I would have kicked it.
I totally get the beauty of having data in a class and throwing it at different serialisers and having it Just Work. Switch between XML and Json and maybe binary and text and build out this whole massive ecosystem that screams "I'm trying to do too much!".
But dear lord. It's like root canal surgery.
Is anyone actively using XML as a data transport format? I get that we all use it in things like XAML and ASP.NET pages and the whole HTML thing, but as something that is not seen or edited by humans, that needs to be cognizant of bandwidth, is it still being used in that manner or am I just really, really intolerant this morning?
Mostly because I'm not fond of XML ... come to think of it, I'm not fond of HTML either.
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
If something has a solution... why do we have to worry about it? If it has no solution... for what reason do we have to worry about it?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
I've found that .NET XML serialization works fine and is relatively simple as long as .NET is doing the round-tripping. I've occasionally had to write my own serializers, but that's usually fairly trivial.
Adapting it to an existing schema or otherwise-specified form is a PITA. Instead of being able to say "handle this in XML", you essentially have to write code that implements the schema. This of course sucks, because the schema changes all the time (trust me, it's in the rules). You only get basic parsing out of the .NET XML support if you go this route.
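For readers outside .NET, the happy-path round trip described above looks roughly like this. This is a Python sketch using the standard library, not the actual XmlSerializer API; the Person type and element names are invented for illustration:

```python
import xml.etree.ElementTree as ET
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    age: int

def to_xml(p: Person) -> str:
    # Serialize: one element per field, mirroring what a default serializer emits.
    root = ET.Element("Person")
    ET.SubElement(root, "Name").text = p.name
    ET.SubElement(root, "Age").text = str(p.age)
    return ET.tostring(root, encoding="unicode")

def from_xml(xml: str) -> Person:
    # Deserialize: read the same elements back.
    root = ET.fromstring(xml)
    return Person(name=root.findtext("Name"), age=int(root.findtext("Age")))

original = Person("Ada", 36)
assert from_xml(to_xml(original)) == original  # round trip works fine
```

As long as the same code writes and reads the XML, this Just Works; it's conforming to someone else's schema that forces you to hand-write the mapping.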
Some guy somewhere had a bad dream and woke up with the idea: now how can I make something totally confusing and complicated which computers can read effortlessly but humans will find totally incomprehensible? He came up with XML and ticked all the necessary boxes/requirements perfectly.
It was designed by a committee and shows it. When I use XML I try to use attributes to store the data; it makes the payload much smaller, and I avoid nesting if possible. LINQ is good with XML and XDocuments.
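The attribute-versus-element payload difference is easy to demonstrate. A small sketch in Python's ElementTree (rather than LINQ/XDocument), with made-up data:

```python
import xml.etree.ElementTree as ET

# Element-based: each value wrapped in its own child element.
elem = ET.Element("point")
ET.SubElement(elem, "x").text = "1"
ET.SubElement(elem, "y").text = "2"
element_form = ET.tostring(elem, encoding="unicode")

# Attribute-based: the same data as attributes on a single tag.
attr = ET.Element("point", x="1", y="2")
attribute_form = ET.tostring(attr, encoding="unicode")

print(element_form)    # <point><x>1</x><y>2</y></point>
print(attribute_form)  # <point x="1" y="2" />
assert len(attribute_form) < len(element_form)
```

Every element costs an opening and a closing tag, so moving scalar values into attributes shrinks the payload noticeably once the structure gets wide.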
I never had to suffer through this kind of pain. I mostly worked on a system that was originally developed when memory and CPU time couldn't be frittered away on messages the size of .jpgs. Our proprietary language prefixed pack(n) to the type to control a field's width, and it was easy to predict how it would lay out a struct. Developing everything in the same language and standardizing on endianness made it possible to read/write structs directly from/to messages that used TLV encoding (type=parameter id, length=bytes, value=struct, nested if necessary). Very efficient, and no serialization or deserialization.
But processors were upgraded independently, so an interprocessor protocol had to remain backward compatible. In rare cases, this meant that an adapter in release n+k had to convert a message received from a processor still running release n.
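A rough sketch of the TLV framing described above. The parameter ids, field widths, and big-endian choice here are assumptions for illustration, not the original system's actual layout:

```python
import struct

# Hypothetical parameter id (assumption, not from the original system).
TEMP_ID = 1

def tlv_pack(type_id: int, payload: bytes) -> bytes:
    # type (1 byte) + length (2 bytes, big-endian) + value
    return struct.pack(">BH", type_id, len(payload)) + payload

def tlv_unpack(buf: bytes):
    """Yield (type_id, value) pairs from a concatenated TLV buffer."""
    offset = 0
    while offset < len(buf):
        type_id, length = struct.unpack_from(">BH", buf, offset)
        offset += 3
        yield type_id, buf[offset:offset + length]
        offset += length

# Two parameters packed back-to-back into one message.
msg = tlv_pack(TEMP_ID, struct.pack(">h", -40)) + tlv_pack(2, b"ok")
decoded = list(tlv_unpack(msg))
assert decoded == [(1, struct.pack(">h", -40)), (2, b"ok")]
```

Because a receiver skips unknown type ids by their length field, new parameters can be added in release n+k without breaking a processor still running release n, which is exactly the backward-compatibility property described above.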
Is anyone actively using XML as a data transport format?
Gosh yes, in a number of ways:
0: storing encrypted ftp credentials for desktop apps (yep, right out in the open on a web server); they get changed annually or as needed.
1: advertising program updates for desktop apps
2: uploading client data from desktop apps to an ftp/web server for processing, then maybe reporting
3: downloading client data from web apps to desktop apps (SQL Server 'for xml' is great); the xml loads directly into a DataTable...life is good.
Of course, it helps when you control both ends (creation and consumption) of the xml content.
"Go forth into the source" - Neal Morse
"Hope is contagious"
I see XML (and even more: HTML) as a binary format. Not suitable for human consumption.
XML is OK for something generated as a binary serialization, untouched by human hands. Evaluated as such, XML's qualities are mediocre. I cannot imagine any binary Tag-Length-Value format worse than XML, by any standard. XML has a single strength: you can edit it using vi! (Or for that matter: cat 0 > filename)
Not too long ago, I asked (in General Programming) for reactions to my proposal of having a friend of mine provide a lot of tabular data as Excel tables, rather than vi-editable files. That was more or less universally condemned: either he should provide data as a vi-editable text file - in the class of XML, CSV, YAML, JSON ... - or I should develop a tailor-made domain-specific data entry application, doing a complete validation of all input data. Thinking that an Excel sheet might contribute to validation, whatever checks were added, was just naive and worthless.
I accept the arguments for a data entry application, as long as we recognize that text-based binary formats such as XML, CSV, YAML, and JSON are unfit for human consumption. They are binary: you have to be concerned about the representation, with regard to use of special characters, quoting, length restrictions, ... The content isn't free. So, let's make data entry applications that are free. How easy is that, with XML, CSV, YAML, or JSON as the user-level data entry format?
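The "you have to be concerned about the representation" point is concrete: the same raw string needs different escaping in each text format. A small Python demonstration (the sample string is made up):

```python
import csv
import io
import json
from xml.sax.saxutils import escape

raw = 'He said "5 < 6", naïvely'

# XML: & and < must be entity-escaped before embedding in element text.
print(escape(raw))  # He said "5 &lt; 6", naïvely

# JSON: the encoder escapes quotes (and, by default, non-ASCII characters).
print(json.dumps(raw))

# CSV: embedded quotes and commas force the field to be quoted, quotes doubled.
buf = io.StringIO()
csv.writer(buf).writerow([raw])
print(buf.getvalue().strip())
```

Three formats, three incompatible quoting schemes for one value; a human typing these files directly has to get all of them right by hand.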
Once we have come that far: Why not use a truly binary format, most likely TLV based? If you admit that the user should never edit the file directly, neither with vi nor cat 0 > file, what is then the advantage of using XML, HTML, CSV, YAML, JSON, ... ?
If anyone insists on obtaining the information in an "editable" format, generating it from a binary TLV representation is usually quite trivial. For the applications I manage, I can easily provide stored information in either "editable" format, or even (to some degree) accept input in those formats. Yet, any "editable" format is secondary. Simple tree structures, those that you can easily edit using vi (or cat 0 > filename), are easily handled, but if the data structures require cross-linking, you may need a domain-dependent data entry (or data manipulation) tool. You just can't handle complex structures in XML, YAML, or JSON; they have to be managed with specialized tools.
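Generating an "editable" view from a binary TLV record really is close to trivial. A hedged sketch in Python; the field table, ids, and layout here are invented for illustration:

```python
import json
import struct

# Hypothetical table mapping TLV parameter ids to field names (assumption).
FIELDS = {1: "temperature", 2: "status"}

def tlv_to_json(buf: bytes) -> str:
    """Render a TLV buffer as a secondary, on-demand JSON view."""
    out = {}
    offset = 0
    while offset < len(buf):
        # type (1 byte) + length (2 bytes, big-endian) + value
        type_id, length = struct.unpack_from(">BH", buf, offset)
        offset += 3
        value = buf[offset:offset + length]
        offset += length
        out[FIELDS.get(type_id, f"field_{type_id}")] = value.decode("latin-1")
    return json.dumps(out)

record = struct.pack(">BH", 2, 2) + b"ok"
print(tlv_to_json(record))  # {"status": "ok"}
```

The binary record stays the primary representation; the JSON (or XML, or CSV) view is just a rendering that can be regenerated at any time.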
The essential point for this discussion: XML, as well as other "editable" formats, is completely unsuitable for anything but the most primitive data entry. I'd much rather read a table of input data from an .xlsx file that I can preview in Excel than from an XML, CSV, YAML, or JSON file that I can preview in vi (or for that matter: cat filename | less).
That's me. I recognize that others recognize vi as the ultimate data manipulation tool. (In its days, managing peek and poke was also considered essential to be recognized as truly qualified.)
Sometimes, I feel like an old, stubborn, ancient grandpa. At the same time, those vi (or cat 0 > filename) guys really belong to the generation before me.