From the monthly archives:
Updating my list of hotfixes for DPM (and other) environments. It's been a while!
It's been some time since I revisited the need for Windows 2008 R2 SP1 hotfixes. The last list I published was in August 2011 - and it's held up pretty well overall! The original purpose of the list was to provide the essential hotfixes for a System Center Data Protection Manager 2007 or 2010 install on Windows 2008 R2 with Service Pack 1. I've allowed other fixes to appear there - either because I felt they were important to stability or performance, or because they fixed an issue I'd observed. I went into some detail on this in the last post, but I do like to have a bunch of fixes in my kit for both general and more specific purposes.
This list provides a number of new hotfixes that have appeared since. These additions are constrained almost entirely to issues that can affect DPM and other backups. It's by no means comprehensive, just a useful list of important fixes. As with the previous article, I've coloured the hotfixes most relevant to DPM in red. A number of these hot ...
We do love our fads, don't we?
Let me start this post by saying that, as with all fad.. err, trends, I'm not totally against BYOD. I've just been in IT for long enough not to jump on the bandwagon of every damn "trend" that comes along, because they come along often.
What is BYOD? Bring Your Own Device, or in other words, staff bringing their own smartphone, tablet, notebook, or similar devices to work. It's an idea that's gained quite some traction with marketers, journalists, and C-level execs. It's not so far different from the classic problem of a high-level exec buying a new shiny device - outside of the Standard Operating Environment - and insisting that IT make it work. It's just spreading that out to a much broader degree, following the innumerable "trends" of times past.
Server-based computing and thin clients never really set the world on fire. Server virtualisation didn't reduce complexity or server sprawl - in the sense that it's now all too easy to run up a new virtual server, and you now have a whole ...
System Center 2012 is here, and it brings new licensing! Here's what you need to know.
By now, you may have heard that System Center 2012 has reached GA (General Availability) stage. It's been available for download for a little while, but Microsoft naturally wanted to align the announcement with the Microsoft Management Summit (MMS) that's happening this week.
System Center is, of course, Microsoft's integrated management platform for IT, and one of its fastest growing product lines in business terms. That's no accident, either - management is the single most consistent challenge across IT shops, regardless of size, technologies, and headcount. We've come a long way from the days where Systems Management Server (SMS) was the only Microsoft offering in this regard - and even since the introduction of Microsoft Operations Manager. These products were clunky and limited in contrast to their modern counterparts, System Center Configuration Manager and Operations Manager.
Times have moved on; now the System Center portfolio also covers backup, virtualisation, service ...
A new friend in the System Center MVP stable!
While AuTechHeads isn't focused specifically on Microsoft technologies,
it's certainly a big part of the IT landscape in the ANZ region, and we do have our share of Microsoft experts around the joint. I didn't get around to posting this earlier, but I was privileged earlier this month to be introduced to a new MVP for System Center Cloud and Datacenter Management, Rob Ford!
In line with the System Center 2012 release, Microsoft recently rolled the various MVP areas for System Center up to just two - System Center Cloud and Datacenter Management, and System Center Client Management and Security. Client Management and Security covers Configuration Manager and Endpoint Protection, while Cloud and Datacenter Management covers the rest of the System Center suite. My own MVP award for Data Protection Manager was therefore rolled up into the Cloud and Datacenter Management area, and until now I was the only one in Australia and New Zealand.
I don't have a full bio for him, but Rob specialises in S ...
Keep refreshing for updates today! 12th April 2012
I had to rush off to the airport and crashed out, back home in Adelaide now. It was a very interesting event, and was great to get the opportunity to talk to some key HP staff. I'll summarise the whole event in a few days once I've absorbed it all.
Bit of a gap as this section was particularly technical around layers, zones, repositories, pools, catalogues - you get the idea :)
This requires a common foundation. There are three layers for an integrated cloud platform to cover IaaS, PaaS and SaaS (hmm, most things seem to come in threes today): Demand - User Interaction, Deliver - Service Orchestration, and Supply - Resource Operation.
Architecture Deep Dive for HP Cloud:
IT becomes the service broker, and also needs to choose where to put what. It should also be designed to be able to be moved from one environment to the next.
If someone uses your hosted servers for an attack, who is at fault? The provid ...
Make sure you refresh the page for the latest updates.
I have just realised that the times are Adelaide times, not local :) Lunch time, so after that I'll continue with Part 2.
HP Enterprise Cloud Services: Global Availability, Communications & Collaboration, Enterprise and SaaS Applications. One of the bigger benefits is Testing as a Service, which should dramatically decrease configuration and setup times. The big goal is delivering the right scale for the right cost. HP do end-to-end migrations.
The current evolving state of hybrid delivery is a mix of traditional, private, managed and public. The envisioned future will use common architecture, converged management & security, open & standards based, develop once - run anywhere, and flexibility & portability. This is needed to reduce the complexity of managing too many different environments by too many different methods.
HP Converged Cloud is built on OpenStack technology, and works on a ...
This review will not use the word 'Phablet' or 'Tone' to describe this device.
I have been trialling the new Samsung Galaxy Note. For those of you who haven't heard of or seen this phone before - it's huge. Huge compared to any other phone you've seen, with a 5.3” WXGA (1280 x 800) screen. Check out the official specs here: http://www.samsung.com/global/microsite/galaxynote/note/spec.html?type=find
The first thing that came to my mind when deciding if I wanted to test this device was this Dilbert comic:
So, can a device still be a good phone, while being large enough to be a tablet? After playing around with it for a while, my personal answer is 'yes', but it's still not the best solution for every scenario.
The first thing I noticed about the phone after taking it out of the box was the size. Surprisingly, the phone is quite light, thin and sturdy despite this. After realising I also needed to put the battery in, it was still quite light. Pow ...
Data. We save it, store it, back it up and access it as we need to. We grow larger and larger storage capabilities to fit it in, we implement backup regimes with SAN / NAS and Virtual Tape Libraries to avoid "slow" transfers to tape. We end up using tape anyway. Security is often overlooked because clients are complacent, but now is the time to at least educate them about alternative options. There is no such thing as too much security; each vendor has a role to play in an environment, and they are only too happy to assist.
How many companies and departments have a disaster recovery solution in the works, either being implemented or planned? What happens if that strategy is called into action due to a hardware failure or malicious action?
Setting aside the Internet and external users for a moment: if I told you that the main cause of data loss through deletion or malicious action is actually an internal source, how can you ensure that it doesn't happen? What implementations are there in the market that account for internal user attack?
Most firewalls sit at the gateway. They probe for signatures that identify external behaviours; they don't look back in. If the user is behind the gateway and firewall, in the "safe" zone, how can a sy ...
Integration is a scary thing to many Admins and Engineers...
An opinion piece here, so please poke holes and post criticisms below.
Lately I have been going through a lot of system changes at work. That is to say, more than normal, and most at the early stages. We've been stuck in a state of limbo, mainly because the several systems we want to upgrade or change all talk to each other in one way or another. I'll first briefly outline one house of cards, and then move to what should have been done better, generally speaking (or typing as the case may be).
We are on Exchange 2007, and want to go to Exchange 2010. That's not too difficult, you may think: you can build your whole new Exchange environment, move a few mailboxes over for testing, then just do a mass mailbox migration over the weekend and everything's great.
This would be true, if several other systems weren't leveraging Exchange 2007. Firstly, voicemail. Our phone system passes unanswered calls through to the Unified Messaging Exchange 2007 server, which means ...