Pros and Cons of VB, C#, and "concept encapsulation"

mskeel

Prompted by the differences between how Visual Studio 2005 handles designer-generated code in Visual Basic and C# in this post, a conversation began about what is hidden in VB versus C# versus IL versus assembly, followed by a debate over whether it is a good thing that so much information is hidden from VB developers and what your opinions on that are. This is a continuation of that conversation and debate.
******

Marble_eater, technically, at least in the C++/assembly world, everything reduces down to do-while loops. Loop performance is going to be affected by a number of factors, including but not limited to function calls within a loop (as you said), but also paging/memory use and how you access your data. As I understand it, the .Net compiler does a pretty good job of optimizing loops through techniques such as loop unrolling and inlining some methods, but as your experiment shows, there is still a significant hit in the overhead of declaring and using an iterator. Is this really a deal breaker? I guess the answer is that it depends. All I have to say is: Moore's law, how often do you really deal with 100,000,000 entries in an ArrayList, and I would hope a person in a situation needing that much speed would know a thing or two about programming and how a computer works.
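For anyone who wants to see the kind of gap being talked about, here is a minimal sketch of that sort of measurement (my own illustration, not marble_eater's actual test; the element count is an arbitrary assumption) comparing an indexed for loop against foreach over an ArrayList:

C#:
using System;
using System.Collections;
using System.Diagnostics;

class LoopTiming
{
    static void Main()
    {
        const int count = 10000000;           // arbitrary size, just big enough to measure
        ArrayList list = new ArrayList(count);
        for (int i = 0; i < count; i++)
        {
            list.Add(i);                      // ints get boxed on the way in
        }

        long sum = 0;
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < list.Count; i++)  // plain indexed loop
        {
            sum += (int)list[i];
        }
        sw.Stop();
        Console.WriteLine("for:     {0} ms (sum={1})", sw.ElapsedMilliseconds, sum);

        sum = 0;
        sw = Stopwatch.StartNew();
        foreach (object o in list)            // goes through an IEnumerator behind the scenes
        {
            sum += (int)o;
        }
        sw.Stop();
        Console.WriteLine("foreach: {0} ms (sum={1})", sw.ElapsedMilliseconds, sum);
    }
}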

For the initial target audience of VB, I think it is 100% unnecessary to know this information, and it would just get in the way of the purpose of VB -- to make it easy to quickly crank out great software. Delegates and event binding get in the way. Designer code gets in the way. Lack of a foreach gets in the way. For that matter, pointers and memory management get in the way.

Just remember that every tool in the toolbox has a purpose (otherwise I hope it wouldn't be in your toolbox!), but every craftsman always has their favorites.
 
I am just going to reiterate--the loop example is just that: an example. There can be other kinds of benefits from knowing what is going on behind the scenes.

Maybe that kind of attention to detail is what separates a good programmer from a diligent programmer. Either that or a normal person from a person who has OCD. I like to have a more thorough understanding of what is going on when I write a program (or do anything, for that matter). A richer understanding of the details can help you see things in a different light and show you more possibilities.
 
marble_eater said:
Maybe that kind of attention to detail is what separates a good programmer from a diligent programmer.
My opinion is that it separates the mediocre programmers from the great ones. Which is why, even though VB seems somewhat limited and not as powerful, if you know what you are doing you can still make some really amazing programs -- and in far less time and with less stress than working with C++, considered by many to be the most powerful language of them all.

marble_eater said:
Either that or a normal person from a person who has OCD.
:D That too!
 
As a highly versed x86 assembly language programmer...

The problem with "hidden" information is that it is often exactly the knowledge you need for making (or avoiding!) algorithmic changes that would alter genuine real-world performance.

As most decent programmers know, the big optimisations are algorithmic changes rather than code reordering/inlining/whatever tweaks.

Is that list iterator on that class in the class library thrashing the L1 cache or not? (Basically, how local is the auxiliary data needed for iteration management? Can't know.. it's hidden from me.. a derived class also might behave differently.)

That's important info that could easily help me avoid trying multiple strategies (i.e., "one item at a time, many operations" vs "one operation at a time, many items" vs "neither is acceptable, must get to rethinking the overall plan").
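To make the locality question concrete, here is a minimal sketch (my own illustration with a plain 2D array, since the library's internals are hidden; the dimensions are arbitrary assumptions). Both loops do identical arithmetic, but one walks memory sequentially while the other strides across it:

C#:
using System;
using System.Diagnostics;

class LocalitySketch
{
    static void Main()
    {
        const int rows = 4096, cols = 4096;   // arbitrary dimensions (~64 MB of ints)
        int[,] grid = new int[rows, cols];

        long sum = 0;
        Stopwatch sw = Stopwatch.StartNew();
        for (int r = 0; r < rows; r++)        // row-major walk: consecutive addresses
            for (int c = 0; c < cols; c++)
                sum += grid[r, c];
        sw.Stop();
        Console.WriteLine("row-major:    {0} ms (sum={1})", sw.ElapsedMilliseconds, sum);

        sum = 0;
        sw = Stopwatch.StartNew();
        for (int c = 0; c < cols; c++)        // column-major walk: long strides, cache-hostile
            for (int r = 0; r < rows; r++)
                sum += grid[r, c];
        sw.Stop();
        Console.WriteLine("column-major: {0} ms (sum={1})", sw.ElapsedMilliseconds, sum);
    }
}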


Of course.. none of this means anything unless performance is an issue.. but I think that more often than not, most GOOD programmers are slamming their heads against the performance wall frequently (because that sort of experience is PRECISELY what made them good to begin with!)
 
As a former C/C++ developer I can appreciate where you are coming from, but I think it's time to move on from this kind of thinking. One of the caveats we all accepted when we started using higher level languages was that we were going to lose some control and that we'd have to trust the compiler to take care of our code in such a way that it won't hurt our program. Without this trust, we're back to hand-rolling assembly to get the "best" loop performance possible.

Does an iterator thrash the L1 cache or not? As a .Net developer, that isn't my concern. That's the concern of the person building the latest and greatest optimizing compiler for me. That's the concern of that sorry sucker still stuck hacking C code in some basement someplace -- wishing he didn't have to declare his variables at the top of his functions and longing for an iterator to make cycling through his malloc'ed arrays easier to read. As a user of a modern computer, that also isn't my concern. If you are experiencing a noticeable slowdown in computations due to L1 thrashing, then you either need to get a better computer or not write your program in .Net.

Trusting that the compiler is taking care of reorganizing my elegant, human-readable code into efficient, machine-readable code allows me to use my brain power to tackle much more difficult and important issues -- like not introducing bugs into my system and making my system usable by people. It lets me concentrate on the algorithms I'm writing, and not the tricks I can use to make my algorithms work for a specific architecture or machine configuration.

Losing sleep over all of this stuff in the .Net world is a waste of time. If you are that concerned with performance, you need to use another language.
 
mskeel said:
Without this trust, we're back to hand-rolling assembly to get the "best" loop performance possible.

"Best" isnt a stated requirement.. but such issues can have very large effect on performance (Im not talking about small performance increases here, Im talking about large ones... still not approaching "best")

mskeel said:
As a user of a modern computer, that also isn't my concern. If you are experiencing a noticeable slowdown in computations due to L1 thrashing, then you either need to get a better computer or not write your program in .Net.

You know.. I've heard this sort of "reasoning" for several decades.. it wasn't true in 1986, it wasn't true in 1996, it isn't true in 2006, and it won't be true in 2016.

If your programs have no time-critical code then good for you.

Some of us bang our head against the performance wall regardless of how fast our processor is... because we desire to bang our head against it! A faster processor simply means that we can do more, and we fully plan to do so.

mskeel said:
Losing sleep over all of this stuff in the .Net world is a waste of time. If you are that concerned with performance, you need to use another language.

I've been hearing this for decades too.. it used to be said about C, then about C++, always about the various Basic and Pascal compilers (incorrectly), and so forth...

BTW: .Net isn't a language
 
Rockoon said:
"Best" isnt a stated requirement..
But what's best for your program is. I put that in scare quotes to highlight the interpretation of a tradeoff. In this case you could be sacrificing maintainability for performance. Is your program better or worse for it? I'd hope you always produce the best work you can at the time. Personally, I'm of the opinion that maintainability is more important than performance, so I'm willing to make certain sacrifices. I think this is just where we differ in opinion.

When I'm writing code in .Net, I'm usually writing business-style applications. Meanwhile, when I'm writing code for embedded real-time systems, I'm writing in C++. Why? Because I know that no matter how hard I try, any .Net language I use is going to run 5 to 7 times slower than the same code compiled natively (Win32, gcc, etc.). That's just a given. Thus, if performance is *that* important, you've already got a handicap with .Net, so what are you really trying to accomplish?

Don't get me wrong. I'm not advocating that code be written with reckless abandon. If anything, the experiment that marble_eater ran earlier showed that it is important to carefully consider the decisions you make in your code. I'm just of the opinion that it's silly to get bent out of shape trying to squeeze extra performance out of pure .Net code just for the sake of running a few milliseconds faster when, in the grand scheme, you could have leaps-and-bounds improvements by simply choosing another language. I'm a fan of concept encapsulation and making things as easy as possible for people to write code. Why bog a manager down with curly braces and semicolons, or arrows, dots, and stars, if all he wants to do is write a quickie program to help him sort his email or something? Why make anyone worry about memory management when it clearly isn't needed?

Rockoon said:
BTW: .Net isn't a language
.Net isn't a language per se; it's a platform. But seeing as how VB.Net, C#, managed C++, Python, Delphi, and many other languages can all be compiled to the same IL, which is then compiled just-in-time by the .Net CLR for execution on the .Net Framework, you can make certain generalizations about the various .Net languages because they do boil down to the same intermediate language. Plus, when you're using the various classes offered by the .Net Framework, it's really just a matter of syntax:
VB.Net:
Private Sub DoSomething()
   Dim listy As New ArrayList
   For i As Integer = 0 To 50
      listy.Add(i)
   Next
End Sub

C#:
private void DoSomething()
{
   ArrayList listy = new ArrayList();
   for (int i = 0; i <= 50; i++)
   {
      listy.Add(i);
   }
}

C++/CLI:
private: void DoSomething()
{
   ArrayList^ listy = gcnew ArrayList();
   for (int i = 0; i <= 50; i++)
   {
      listy->Add(i);
   }
}
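(If anyone wants to verify the "same IL" claim for themselves: assuming you have the SDK's ildasm.exe on hand, you can dump each build with something like ildasm DoSomethingVB.exe /out=vb.il and ildasm DoSomethingCS.exe /out=cs.il -- the assembly names here are just made-up examples -- and diff the two files; the method bodies should come out looking nearly identical.)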

Library/class support is vastly improved over previous languages such as C++.
 
mskeel said:
When I'm writing code in .Net, I'm usually writing business-style applications.
I totally agree with that. When I write programs for my company, my boss doesn't care that I can process a 300k CSV file in 2.5 seconds instead of 3. He wants the job done. So even if this file took 10 seconds it would be okay. So why should I try to shave those milliseconds off? There is no reason, unless the task is more time-critical.

Of course, speed is important. My boss is really happy that we can generate invoices in PDF in 3 hours instead of 2 days with the prior system. But we are talking about a huge improvement to code that was written by a really poor VB6 programmer.

I'm not interested in gaining 23 ms on a task that lasts 2 seconds. I'm not interested in gaining 1 minute on a task that lasts 3 hours.

But I am interested in cutting those times by 50%. So speed is relative. Since .NET won't allow me to make that kind of gain (and neither would a C++ program), I'm better off writing a program in .NET in one day than building C++ software in more than a day (including debugging, checking for memory leaks, etc.).

When you talk about speed gains, you always have to ask yourself "How much time will it take me?" and "Is my boss ready to pay for that?". Because most development is paid for, and companies don't care about a process that runs 0.001% faster (unless it is a time-critical process, of course).
 
Concept encapsulation doesn't have to equal hidden information..

Programming languages represent abstract machines.. this includes Ada, Basic, C, Delphi, Forth, Java, Lisp, Pascal, Smalltalk, etc.. they have all represented concept encapsulation without hidden information in the past.



A C++ compiler will produce code "5 to 7 times faster"???

Maybe in very specific examples with worst-case code vs best-case code.. most probably due to *hidden information* not indicating that you were in fact jamming one of the worst cases through the language/compiler/framework that you were using..

The big micro-optimisations these days are typically

(1) memory cache related or
(2) those cases where ASM offers abilities not represented by an abstract language.. operations like rotate-through-carry, which simply can't be coerced out of any high level language that I am aware of ..
(3) when the compiler consistently makes a bad choice, such as GCC's habitual conversion of constant multiplications to a series of shifts and adds (small gains vs big losses) .. but this is a special case like everything else.. compiler-specific, not language-specific.

.. so no, C++ isn't going to give you 5 to 7 times more bang for your buck over the long haul.. more like 1.2 times at the most... these are all abstract languages, and if it's being done with one up-to-date compiler to good effect, it's also being done with the other up-to-date ones (less the special cases)
 
Rockoon said:
A C++ compiler will produce code "5 to 7 times faster"???
This refers specifically to the hit caused by the JIT compilation of the IL versus the already-compiled-and-ready-to-roll machine code from gcc/g++ or Win32 VC++. At the same time, subsequent executions of the same .Net application will be faster because of caching -- but that's only when you run the same app several times in a row. I guess I should have said something like "generally" or "for the most part" .Net will run 5 to 7 times slower than similar code compiled with g++. The bottom line, though, is that it takes time for the CLR engine to crank up to speed before it's even able to start work on the application you want to run.
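(Side note: part of that startup hit can be avoided by pre-compiling the IL to a native image ahead of time with the framework's ngen.exe tool -- something like ngen install MyBusinessApp.exe, where the assembly name is just an example. The code still runs on the CLR, so this only addresses the JIT cost, not the broader speed comparison.)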

I will concede that my stats might be out of date with regard to newer versions of .Net, Java, and gcc/g++. But that definitely was true at some point in the past year or two. Everything changes so quickly, it's just hard to keep up sometimes. ;)


Rockoon said:
...they have all represented concept encapsulation without hidden information in the past.
And I've heard this argument made over and over again as well, with each new iteration of a higher level language. Let's take a step back for a moment, though. Really, when you take a language to a higher level of abstraction, you're going to be hiding choice pieces of information that will, at least in the opinion of the language's author, make the language better in some way. It seems that a person's perspective on how much or how little is hidden (and how positive or negative that is) is determined by that person's first language. This is one of the reasons why I think it's a rotten idea to teach Java to students as their first language. So much is hidden from them that they don't even know it exists. I'd make the same argument for not teaching C#/VB .Net to students as a first language.

In order to make details easier for the programmer, you have to hide something. Otherwise concept encapsulation can't exist, and the very idea of abstraction doesn't make any sense.
 