WSS and ASP.NET

posted on 2005-06-23 at 00:35:58 by Joel Ross

I installed WSS as part of my BizTalk ramp-up - BizTalk uses it for some of its services, such as BAS.

Anyway, it took me a few days to realize how much WSS messes with the rest of your sites. It locks down ASP.NET development to the point of unusability (yes, I know that's not a word, but it should be!). You can't debug your apps, and once you turn that back on, you can't use session state - until you turn that on too! Oh, and before you do any of that, you have to tell WSS that the virtual directory you're building your ASP.NET app in isn't a WSS-managed directory.

After the frustration of the past week, here's my advice: if you can avoid installing WSS, do it!

Of course, if you're reading this closely, it's probably because you found it doing a search, and you don't have a choice - you need ASP.NET and WSS to work together. It's really not that bad. First, like I said, you have to add your desired virtual directory to the list of excluded directories in WSS. Once you do that, you have to add a few things to your web.config.
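As an aside, if you'd rather script the exclusion step than click through the SharePoint Central Administration pages, stsadm.exe can do it. Something along these lines should work - the URL is a placeholder for your own virtual directory:

stsadm.exe -o addpath -url http://localhost/myapp -type exclusion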

So what do you have to add? The first is a trust level. Inside the <system.web> element, add the following entry:

<trust level="Full" originUrl="" />

Then, if you want to be able to use session, you have to add the following item:

<httpModules>
  <add name="Session" type="System.Web.SessionState.SessionStateModule"/>
</httpModules>

<pages enableSessionState="true" enableViewState="true" enableViewStateMac="true" validateRequest="false" />
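For clarity, here's how those pieces nest once they're all in place - a minimal sketch, assuming a mostly default web.config (keep whatever else is already in yours):

<configuration>
  <system.web>
    <!-- WSS drops apps to its WSS_Minimal trust level; restore full trust -->
    <trust level="Full" originUrl="" />
    <!-- WSS removes the session module, so add it back -->
    <httpModules>
      <add name="Session" type="System.Web.SessionState.SessionStateModule"/>
    </httpModules>
    <!-- Re-enable session state (and view state) for your pages -->
    <pages enableSessionState="true" enableViewState="true" enableViewStateMac="true" validateRequest="false" />
  </system.web>
</configuration>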


It's definitely worth noting that I ran into this while trying to do two things: first, getting a pre-existing app up and running after installing WSS; and second, creating a web service from an orchestration in BizTalk Server. This post, by Bryan Corazza, was a huge help. I still get intermittent errors, but his post got me past constant errors.

Categories: ASP.NET


 

How to Find People With Passion

posted on 2005-06-23 at 00:34:30 by Joel Ross

As NuSoft continues to grow, it sounds like we're looking more and more at hiring people right out of college. Most of our current hires have been experienced hires, and the process for them is different. You can't really expect someone straight out of college to have a ton of knowledge about the kinds of things you're doing - in college, the solutions to your coding problems are predetermined, and in the real world, that's not the case. Because of this, you have to interview the two groups differently.

Anyway, Joe Kraus has a nice post about how you can tell whether someone has passion for what they're doing, regardless of their experience level. He asks three questions:

1. Do you have a blog? I do, obviously. A better question: why do (or don't) you blog?

2. What's your homepage? This is a good question, but a little flawed with today's tools. I don't have my own custom homepage - I start with a blank page because it's faster. I use Maxthon, and I use its shortcuts: for example, I can type "g asp.net" to search Google for asp.net. I don't need a custom homepage to navigate quickly to what I use. So the question isn't quite right, but it will still elicit the information you want.

3. Do you contribute to an open source project? I don't, but it's still a good question. I read somewhere else about asking people what piece of software they've written that they use on a daily basis - passionate developers always seem to write their own tools to make life easier. Personally, I wrote my own blogging tool because I couldn't find one that did two things: produced good XHTML, and let me post to multiple blogs at the same time, regardless of blogging software. As for open source projects? Well, I have plans to release my blogging tool to the open source community - I just haven't gotten around to it yet!

These are good questions that can be combined with more technical ones to get a better overall feel for how a candidate will fit into your culture - which is really what you're looking for in a new hire.

Categories: General


 

Global Bank Integration From the Patterns And Practices Team

posted on 2005-06-23 at 00:02:20 by Joel Ross

I saw a demo of this at Tech Ed, and now I've found it on MSDN. The Patterns and Practices team has created a sample that shows how to use integration patterns effectively.

They did a couple of very cool things with it. First, you can view a Flash demo of how it works, which means you can get the benefit of learning the patterns without the pain of installation. Second, the real solution is big - you can install it across six different servers, and it contains 40 projects. Very cool!

Categories: Development


 

Continuous Integration With MCMS

posted on 2005-06-22 at 23:57:19 by Joel Ross

I've started working part time on a very interesting project. It's a CMS project, but it's focused on a C++ back-end system that the CMS site will interact with. It's all COM-based, which means we'll be using COM interop. Fun!

Anyway, my role isn't really to build the CMS site. I'm developing the framework we'll use to build CMS templates, as well as setting up continuous integration so we get automatic builds.

Well, I got the build process done today, and there are parts I'm happy with and parts I'm not.

Let's get the part I'm not happy with out of the way first. Building the project is a pain. I'm currently using an exec task in NAnt to call devenv. I tried using the solution task (I'm on 0.85 RC3), and it can't build the solution correctly - at least not consistently. I had issues with certain obj files being given incorrect names, so the linking didn't work. I had to resort to shelling out to devenv instead. That sucks, because you don't get any output from devenv back into NAnt or CruiseControl, so you don't get an idea of why the build is failing - just that it is.
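For reference, the shell-out looks something like this - a sketch, with the solution name and paths as placeholders. Pointing the output attribute at a log file at least preserves whatever devenv.com writes, even if it never shows up in the CruiseControl web page:

<exec program="C:\Program Files\Microsoft Visual Studio .NET 2003\Common7\IDE\devenv.com"
      commandline="MySolution.sln /build Release"
      output="build\devenv.log" />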

Now for the cool part of the solution. I'm not sure how most people develop with CMS, but we do all of our development against a local CMS database and migrate our changes to a dev server. Usually, that's a manual process: export your changes from your local database to an SDO file, then import it into the dev database on the remote server - hoping you have permissions to get to it, hoping the dev build is up to date, and hoping it all works when you test it.

Here's how we've made that process more automated. First, your check-ins ensure that the dev build is always up to date. As for importing the SDO file: our build server is in our DMZ, meaning you can't get to it directly from our internal network - at least not for file sharing or database connections. So we automated that too - you export your changes, drop them in a folder in source control, and the build process monitors that folder. It gets the SDO file, imports it into the dev database, deletes the file from source control, and then exports the full CMS structure (the site is small; otherwise we would be more granular about the import/export). That export gets checked into source control, so every developer has access to the latest SDO for the whole CMS site.
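In NAnt terms, the monitoring side ends up looking roughly like this. SdoDeploy.exe is a stand-in name for the command-line helper described below, the folder names are placeholders, and the source control add/delete steps depend on your provider, so I've left those as comments:

<target name="cms-deploy">
  <!-- import any SDO files developers have dropped into the watched folder -->
  <foreach item="File" in="incoming-sdo" property="sdo.file">
    <exec program="tools\SdoDeploy.exe" commandline="/import &quot;${sdo.file}&quot;" />
    <delete file="${sdo.file}" />
    <!-- also delete the file from source control here -->
  </foreach>
  <!-- export the full CMS structure so everyone can grab the latest -->
  <exec program="tools\SdoDeploy.exe" commandline="/export latest\site.sdo" />
  <!-- check latest\site.sdo back into source control here -->
</target>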

To get this set up, I used the code from Mark Duant's post about a helper he wrote against the publishing interop API exposed by CMS. I modified it a bit: it uses the CMS interop DLLs directly rather than the wrapper DLL he mentions, and it goes against the server-side Site Deployment API instead of the client-side one. It also accepts command-line options to specify whether to import or export, along with the file to import from or export to. It's pretty rough, but it works.

Anyway, the more I get into build processes, the more things I see that you can do. Next, I'll be developing a build plan for BizTalk, which will probably include automatic deployment to a dev server. Gotta love the power of CI!

Categories: Development


 

Internal Blogs

posted on 2005-06-22 at 23:54:06 by Joel Ross

I've been thinking about the usefulness of internal blogs a lot lately, and there's definitely some value in them. Apparently, I'm not the only one who feels this way. IBM has 3600 internal blogs, and Kevin Briody recently posted about how internal blogs at Microsoft could be used.

First, some history. The Sagestone blogs started out internal. Then, one day, they were public. There was some scrambling to make sure no confidential information was exposed - things that are OK for internal consumption, but not external.

When the blogs were internal, posting was a lot easier - you could post much more detailed code samples without fear of exposing private information. You could easily inform your peers of recent client wins or detailed project status. It's also a good way to communicate with your team: recently, Scott Hanselman posted an email he had to send around - that could have just been a post on an internal blog, if everyone on the team bought into it.

Kevin's post suggests easy ways to get that buy-in, such as a private-labeled RSS reader that ships with a pre-defined OPML file. On the other hand, isn't NewsGator in the process of testing an enterprise solution? Could that be used for enforced delivery of important internal blogs?

Anyway, I think there are some unrealized benefits to internal blogs if they can be discovered easily. If not, they become about as useful as most intranet sites - you'll find them when you need something, but you won't use them on a regular basis.

Note to NuSoft employees: I heard rumors of internal blogs coming, but nothing more than that yet. Personally, I think it would be awesome. There's lots of code I could share there!

Categories: Blogging


 

Virtual PC and File Locations

posted on 2005-06-22 at 22:05:00 by Joel Ross

I've been using VPC as my primary development environment for a couple of months now, and I've found my first annoyance. I thought I had a solution, but it didn't work out as I'd hoped.

First, the problem. I work on multiple clients, and I often want to look at code I wrote for one client to use as a sample for another - lately, more often than not, it's build files and how I've done something before. I can't really run two VPCs at the same time efficiently - I dedicate as much RAM as possible to each one, so I can only run one at a time. If I'm working on client A and need code from client B, my only recourse right now is to shut down VPC A, start up VPC B, get the code, shut down VPC B, start VPC A back up, and copy the code into it. With saved state, it's not as bad as it could be, but it's still not as fast as it should or could be.

One other minor annoyance is search. I use MSN's desktop search to index my files - it can't index inside a VPC hard drive image, so I lose the ability to search the files I'm using for clients.

So what were my options? Both involved moving the main files to the host disk, where I can get to them from the host machine and they can be indexed: moving the files to a drive shared from the host, or using Junctions. Junctions died a quick death. My idea was to map C:\Source to Z:\Source, where the Z drive would be a drive shared through VPC. Since VPC treats shared drives as network drives, and Junctions only work with NTFS volumes, that didn't work out as hoped. Next, I tried moving the files to a network share. That sort of worked - everything runs, but VS.NET warns that compiled output on a shared drive won't be trusted and may cause unexpected errors. Not exactly what I want, either. So now I'm stuck.
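One thing I haven't fully explored yet: those trust warnings come from the .NET 1.1 security policy treating a network share as the intranet zone, and the usual workaround from that angle is to grant full trust to the share with caspol. Something like this - the path and group name are placeholders, and I haven't verified it end-to-end in this setup:

caspol -m -ag 1.2 -url "file://z:/source/*" FullTrust -name "VPC source share"

That might quiet VS.NET down, but it doesn't solve the Junctions problem, so I'm not counting it as a fix.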

Now, moving files to the host machine goes against the idea that a VPC is a standalone entity you can hand off, but I'm not using VPCs for portability. I use them to separate client configurations - for example, I can't get the Vault 2 client and the Vault 3 client to play nicely together, especially when it comes to VS.NET integration. I have clients using both, so I'd go through headaches switching back and forth. Being able to copy a single file to get a whole setup is not my goal - I'm OK with some setup work to move a VPC from one machine to another.

Now, the solution I want: I want my machine to think C:\Source is a real folder when it's actually a network share. I have yet to figure out how to make that work - but if you know how, let me know. I'm just about out of options!

Categories: General


 

Scott's Ultimate Tool List

posted on 2005-06-21 at 09:52:27 by Joel Ross

If you haven't seen it yet, Scott has updated his list of tools that every developer should look at. It's by far the most comprehensive list of useful utilities I've seen.

I looked through the utilities I install, and found only one that his list doesn't cover - Active Ports. It shows which ports are in use and by what process. I've used it a few times when I was wondering why a certain port wasn't available.

Of course, I found one thing that wasn't on his list, but he has many, many that aren't on mine, so you should check it out. I've already downloaded and installed TaskSwitchXP, and although it takes some getting used to, it's very cool. I'll be adding it to the list of things to install on my base image as well as my VPC images.

Thanks, Scott! You're doing us all a huge favor!

Categories: General


 

Framework Development

posted on 2005-06-21 at 09:50:42 by Joel Ross

Ben Carey has a question about frameworks, and is seeking feedback about how we keep our frameworks clutter-free.

I've worked on a couple of frameworks, and I've seen both the good and the bad in them. The bad usually results when the framework isn't tightly controlled - that is, when everyone has the opportunity to modify it, even people who don't understand what the framework is for.

You see, this is a concept not everyone grasps. Developing a reusable framework is a different process than building an application. The framework needs to be generic enough to handle all situations, yet easy enough to use that it doesn't hinder development. By adding one-offs to a framework, you decrease its generality and make it harder to use - everyone has to work around the one-off implementation.

So how have we controlled this in the past? On one project, we had a solid team that understood what should go in and what shouldn't go in. It was pretty easy to maintain. But on another project, we actually created a separate repository for the framework, and only gave a few developers access to the framework code. Part of the build process created an installer that the rest of the developers could install and reference in their projects. What got into the framework was tightly controlled.
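The mechanics were straightforward: the framework's own build produced a versioned package that the application teams consumed. In NAnt, that part is only a few lines - the names and paths here are hypothetical, and we actually produced an installer rather than a zip, but the idea is the same:

<target name="package-framework" depends="compile">
  <!-- bundle the compiled framework so app teams reference a fixed, versioned drop -->
  <zip zipfile="dist\Framework-${build.number}.zip">
    <fileset basedir="build\release">
      <include name="*.dll" />
      <include name="*.xml" /> <!-- doc files, for IntelliSense -->
    </fileset>
  </zip>
</target>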

At least, that was the theory. When I left the project, the framework was pristine (in my opinion), but eventually developers who didn't have the framework mentality got hold of it, and I've heard it's grown into a bit more than it was intended to be.

The moral of the story? Ensure the proper people control the framework, and that they are firm about how it's maintained. Once you get into a position where the framework is getting messy, it's tough - near impossible with large applications - to get back to a fresh state, so ensure it doesn't happen in the first place!

But we know it happens, so how do you get back? I think you have to bite the bullet and have someone who (again) knows the purpose of the framework go through it and remove any code that doesn't belong. There's no magic solution here - you're going to break things. The code that gets pulled out is being relied on, and it has to be integrated back into the system in other, more appropriate places. It's painful, but necessary.

Categories: Development


 

Multiple Projects in the Same Virtual Folder

posted on 2005-06-20 at 00:25:06 by Joel Ross

There's a knowledge base article about how you can split a web project into multiple projects. It's not the simplest process - you have to edit project files by hand - but it would be nice to be able to split certain sections of a web project into their own projects.
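If I'm remembering the mechanics right, the hand-editing centers on each sub-project's .webinfo file, which tells VS.NET where the project lives on the web server - you point the sub-projects underneath the main application's virtual directory. Roughly like this, with made-up names:

<VisualStudioProject>
    <Web URLPath = "http://localhost/MainApp/SubProject/SubProject.csproj" />
</VisualStudioProject>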

Personally, I think this would force me to think much harder about how I want to divide up my projects and make sure that, over the long haul, I don't end up with a bunch of garbled code. I try to keep my code segmented, but inevitably something comes up and the quick-and-dirty solution (given the timeframe) is to mash something into a spot it doesn't belong. By logically separating functionality, the temptation (and the time savings of giving in) would be smaller - and that's a good thing!

Now, to go back and rework a few projects to use this methodology. Or at least test it out. I'm sure there are pitfalls here - like circular references if code is mashed together - but nothing a little refactoring couldn't solve!

Categories: ASP.NET


 

RossCode Weekly #005

posted on 2005-06-19 at 23:58:44 by Joel Ross

It's Father's Day, I'm watching the Pistons, and it's been a long week. I'm behind reading feeds (again), and there doesn't seem to be much big news this week. Of course, I'm probably missing some stuff, but these seem to be the big news makers.

Google to launch a PayPal competitor. If anyone can come in and have a big impact right away, it's Google. If they follow through and launch it, we'll definitely give it a look.

Microsoft China is blocking words such as "democracy" and "freedom," and quite a few folks are up in arms over it. If this were happening in the U.S., I could see what the uproar was about, but Microsoft is a corporation, not a country. It's not their place to judge the laws of a nation like China as right or wrong. What if Microsoft didn't censor those words, and people died because the government of China found them? Would Microsoft be to blame for their deaths, too? Now, don't get me wrong - I think freedom of speech is the most important freedom we enjoy in this country, and if they were censoring here, that would be a problem. Or would it? They own the servers, and they can do whatever they want with the content on them - as long as they disclose their intentions ahead of time. One thing I didn't see mentioned in a lot of the discussions: both Yahoo and Google are doing the same thing! That doesn't make it right or wrong, but it does add context to the attacks.

Yahoo is getting into VoIP. I heard the latest version of Yahoo Messenger already had this, but by buying Dialpad, I guess they'll be able to step that up a notch. This should end the speculation that Yahoo is going to buy Skype. It sounds like AIM (AOL's standalone instant messaging client) has plans to add VoIP features soon too.

Since I didn't find much this week, I'll include two contests, neither of which I plan to enter, so you still have a shot! Both will win you trips to PDC and both are being put on by Channel 9. You can either blog your way there or you can code your way there - whichever suits your fancy.

I'm sure there will be more news this coming week - there are supposed to be some big announcements at Gnomedex, so I'll try to stay on top of those!

Categories: RossCode Weekly


 
