Home

  • CUGC XL at Yankee Stadium – What a Lineup! Here’s What We Learned.

    by Todd Smith, Citrix

    On a beautiful summer day at Yankee Stadium, an eager collective of CUGC XL in New York attendees was greeted by a spectacular view of the hallowed field that is home to the New York Yankees. As a Red Sox fan, it was an emotional and inspirational sight to see. The day was filled with some of the most trusted and innovative companies in the Citrix ecosystem. Participants came from CUGC chapters across the Northeast United States, but we also welcomed guests from chapters in Europe and even Australia!

    CUGC XL in New York emcees Sarah Vogt and Steve Elgan
    CUGC Steering Committee President & Orlando CUGC Leader Sarah Vogt with Omaha CUGC Leader & CTA Steve Elgan

    Leading off the CUGC XL in New York, emcees Sarah Vogt (CUGC Steering Committee president & Orlando CUGC leader) and Steve Elgan (Omaha CUGC leader & CTA) welcomed guests to the venue and thanked the sponsors and participants for joining us. Arrayed throughout the room were our event sponsors: Citrix, ControlUp, eG Innovations, Flexxible IT, Google Chrome Enterprise, Goliath Technologies, IGEL, Liquidware, Nutanix, and Stratodesk.

    In a baseball stadium, we have to start with the lineup, and what a lineup we had to present and share knowledge.

    CUGC XL in New York speakers Joe Kim and Christian Reilly, Citrix
    Citrites Joe Kim & Christian Reilly

    After the welcomes and a brief sponsor introduction, sluggers Joe Kim and Christian Reilly from Citrix were next in the batting order. Christian is the VP of Innovation, while Joe owns all Product Engineering and is the CTO for Citrix. Christian worked the count with his question, “How do we know what to build?” This led to his discussion of product development cycles: how it starts with speculation about what could be possible, followed by ideation, research, and development, finally tying it all together as a product. In some cases, we end up with a viable product or feature that is needed; in other cases, we have spent time learning and getting smarter.

    Christian talked about the challenges of Hybrid Work, and more importantly about how IT is often in a position to “make it work despite ourselves,” which really got the room buzzing as people shared stories at the tables about how IT was either well prepared or scrambled. The spirit of the discussion was that IT rose to the challenges, but there is still a lot of work to do. The biggest challenge is changing culture, especially at the executive level.

    Joe Kim then led a discussion on the challenges of remote or hybrid work. In the context of managing a team that is truly hybrid, with some members in-person and others remote, challenges with participation, giving everyone a voice, and being effective are all part of the reality we have today. Referring to recent surveys of both IT executives and IT practitioners, some similarities rose to the top of the pile, including DaaS as a key enabler for hybrid work. Word association in a live session with CIOs showed that DaaS and Hybrid Work are connected, as did the desire to allow employees the flexibility to truly work from anywhere without losing out on experience, security, or access.

    Christian then followed up with a final discussion on Simplification and Complexity, and how they can be related yet remain different. Simplification of roles and tasks can sometimes require more complex tools and procedures. As we extend employees’ ability to work anywhere, we need to be concerned with security, access, and trust. This brings a more complex security framework that needs to adapt to changing variables while maintaining the security posture of the organization. One example cited by Christian is the consumerization of IT and how it is impacting the way companies see IT. Looking at the three pillars of challenges facing CIOs today – Accelerating IT Modernization, Empowering Secure Digital Work, and Boosting Worker Productivity – Citrix is positioned to continue to help customers reduce complexity and simplify the experience for the digital workforce.

    IT can no longer be ruled by the phrase “The more the IT department says ‘No,’ the less secure they become.” This is a trend that requires Citrix to be more collaborative with ecosystem partners, as in the relationship with Microsoft. In fact, Citrix recently announced the “Citrix Optimization Pack for Windows 365,” which allows users of Microsoft Cloud PC to get some of the same optimizations that on-prem users get today.

    Swiss CUGC leaders René Bigler and Sacha Thomet at CUGC XL in New York
    Swiss CUGC Leaders & CTPs René Bigler and Sacha Thomet

    Next, Swiss CUGC leaders & CTPs René Bigler and Sacha Thomet spoke about how Citrix technologies are impacting education in Europe. Delivering applications, desktops, and services to students across the continent was a complex and critical problem, which included language support, network reliability and availability, and security programs. Citrix was able to help address and overcome these challenges.

    Career Development panel discussion at CUGC XL in New York
    Career Development Panel discussion. L to R: Sarah Vogt, René Bigler, Sacha Thomet, Steve Elgan, Martin Fox.

    One of the highlights of the day came in the form of a Career Development session, led by emcees Sarah & Steve, with CUGC group leaders Martin Fox, Sacha, and René. In this session, panelists discussed Keeping Current with Technology, Collaboration, Career Progression, and Leveraging Partners. Sarah said she really likes being a technology thought leader more than a people leader. Steve talked about working with his team as a leader to let them grow and expand their careers, working with HR to expand job descriptions and job titles and allowing for growth in a small team. All of the panelists agreed that career development and personal growth are an individual responsibility, but need management support along the way.

    Boston CUGC leader Tim Mangan at CUGC XL in New York
    Boston CUGC Leader & CTP Fellow Tim Mangan

    Boston CUGC leader & CTP Fellow Tim Mangan – a stalwart of the CUGC community, founder of App-V, leader in virtualization technologies, and all-around Jedi – shared the latest trends in application delivery with Citrix. While this sounds like a trip down memory lane, Tim made a lot of great points. First, the application still matters, regardless of where it is installed or accessed. He also highlighted that the investment in the Windows OS, while not always logical, is a key part of the ecosystem that supports applications and data. With this, keeping applications and their installers updated on a regular basis is critical to the success of the IT department and the security of the organization. Application delivery via Citrix not only improves the user experience and the overall security posture, but also allows the IT department to remove the app updaters, splash screens, and EULA click-through notifications that users hate.

    MLB Neil Boland and Citrix VP Christian Reilly at CUGC XL in New York
    MLB CISO Neil Boland with Citrix VP Christian Reilly

    After a quick seventh-inning stretch and a drink from the bar, we settled in for a fireside chat with Christian Reilly and MLB CISO Neil Boland, who discussed the challenges associated with a large sports and entertainment organization and the daily threats, weaknesses, and opportunities facing MLB that Citrix helps manage. Neil started with an overview of his organization and what Citrix does to help him achieve his goals. MLB is best described as a service provider to the clubs that make up the league. The goal is to standardize the services provided to the clubs, its partners, the club partners, and the fans – all without impacting competition. Expanding to the minor league systems and organizations is a goal, and presents considerably more challenges.

    Neil further discussed the challenges MLB faces when it produces and competes in the “Field of Dreams” game, which takes place on the cornfield diamond in rural Iowa. As part of the game, they transform a small field into a major league facility, making the technologies of an MLB stadium appear out of nowhere, just like in the movie. The challenge is to allow fans, the media, other clubs, and support staff to participate in the event, regardless of where they are, safely and securely. Citrix plays a huge part in this event, and it is a great story to hear.

    In closing, Neil and Christian highlighted that through the partnership between Citrix and Major League Baseball, they are enabled to deliver a better product to their fans, clubs, players, and partners.

    Finally, CUGC XL in New York would not have been possible without the All-Star team of leaders, Citrix Sales Engineers, Product Managers, sponsors, partners, CUGC members, and the dedication of the CUGC staff. While the location was iconic, the content was the real star of the game.

    Stay tuned for session recordings!

    Until the next CUGC XL, see you on the blogs.

  • How to Set Up a Nutanix Protection Domain to Replicate VMs from One Cluster to Another

    by Ray Davis, CTA

    This is a quick blog to show the power of a protection domain in an AHV setup. First, I want to explain that my lab is not complex yet: I don’t have firewall rules, different VLANs, or any kind of segmentation. It’s a flat network with 2 Nutanix AHV hosts. When I started writing a blog on how to set up a Nutanix protection domain, I was doing this on production and DR sites at a previous company. It worked a treat, and I was able to move AHV virtual machines from different clusters to a DR cluster and back with no problem.

    But since then, I moved on to another company that doesn’t have Nutanix :(, and I don’t have access to the environment anymore. My lab is 100% AHV. Why? Well, because I got into Nutanix 2 years ago or more, and I love what they offer, the hypervisor’s ease of use, and the other software they provide. I am using the Nutanix CE version. However, the CE version is behind what is released now: I noticed in an AOS 6.0 setup that some options are in different areas compared to the CE version, which is based on 5.18 from what I can tell. I am not sure when they will release a newer build for the CE edition either.

    This allowed me to maintain a base skill set in the Nutanix realm. I know it’s not a 100% match for what many are running in a prod setup, but the steps are the same in a production environment. I am now rewriting this blog around my lab setup instead of what I originally had written up. Since I need to get my virtual machines off the other AHV host, I decided to write a little blog on protection domain VM replication, to give you an idea of what it really can do. It is very straightforward. The biggest issue I had back when I was doing it in a production setup was firewalls. It is critical to make sure this is right; if not, you will struggle. Ask me how I know :). We all have to go through the firewall setups with Information Security or whoever manages your firewalls. Just make sure it’s set up correctly.

    As I was saying above, my lab is simple. It is just for me to test things and keep up to date with my CVADS/CVAD journey.

    In my lab, I will refer to two clusters:

    NTX-Cluster-03

    NTX-Cluster-01

    I created two single-node clusters so I can do failovers and mirror a source and destination AHV cluster.

    I was in a situation where I needed another lab server. After searching online, I found a great site called TechMikeNY (Refurbished/Used Dell & HP Servers, Hard Disk Drives).

    I ended up ordering the following. This is all I needed for my second host at this time.

    Item | SKU | Qty | Subtotal
    Dell PowerEdge R630 8-Bay 2.5″ 1U Server | DELL_PE_R630_8B | 1 | $364.00
    Dell 0C34X6 2TB SSD SATA 2.5″ 6Gbps Solid State Drive | 2TB_SSD_SATA_SFF_6G | 2 | $408.50
    Intel Xeon E5-2698 v3 2.30GHz 16-Core LGA 2011 / Socket R-3 Processor SR1XE | 2-30Ghz_E5-2698_V3_16C | 2 | $245.60
    32GB PC4-2133P ECC-Registered Server Memory RAM | 32GB_PC4-2133P | 8 | $683.20
    Dell 1100W 80+ Platinum Power Supply | DELL_1100W_80-PLUS | 2 | $153.80
    Dell iDRAC8 Enterprise Remote Access License | iDRAC8_Enterprise | 1 | $67.20
    Dell HBA330 12Gbps SAS HBA Controller (NON-RAID) MiniCard | HBA330_Mini | 1 | $62.70
    200GB SSD SATA 2.5″ 6Gbps Solid State Drive | 200GB_SSD_SATA_SFF_6G | 2 | $73.30
    Dell 0R1XFC I350 Quad-Port 1GBe Daughter Card | DELL_0R1XFC | 1 | $21.00
    64GB SATA Disk-On-Module SATADOM SATA III 6Gbps Drive | 64GB_SATADOM_6G | 1 | $15.50
    Dell 2.5in R-Series Caddy | R-Series_SFF_Caddies | 4 | $19.00

    Subtotal: $2,113.80
    Discount: -$0.00
    Shipping: $55.60
    GST: $0.00
    Grand total: $2,169.40

     @TechMikeNY

    Here are the firewall port references:

    Ports and Protocols ANY – Disaster Recovery – Protection Domain (nutanix.com)

    Protection Domains (nutanix.com)
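    Before touching Prism, it can save time to confirm the clusters can actually reach each other on the replication ports. Here is a minimal sketch from a Windows box, assuming the commonly cited Stargate/Cerebro ports 2009 and 2020 – verify the authoritative list in the Nutanix port reference linked above:

    # Test DR replication ports against a CVM on the remote cluster (IP is an example)
    $remoteCvm = "10.0.0.53"
    foreach ($port in 2009, 2020) {
        Test-NetConnection -ComputerName $remoteCvm -Port $port |
            Select-Object ComputerName, RemotePort, TcpTestSucceeded
    }

    If TcpTestSucceeded comes back False in either direction, fix the firewall first; as I noted above, that was the biggest issue I hit in production.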

    You need to create a protection domain first. This is what tells Nutanix what to replicate and how often.

    Let’s start by setting up a protection domain on NTX-Cluster-01

    From the drop-down menu, select Data Protection.

    Click + Protection Domain.

    Give it a name that makes sense for what you are protecting.

    Now it will ask you for the VM names.

    You can select whatever schedule fits you best. I set this up to run every day as an example; the 10-minute interval in the screenshot is just to show you the schedule options.
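    If you would rather script the protection domain than click through Prism, the same steps can be driven through the Prism Element REST API. This is a minimal sketch, assuming the v2 endpoints and example names; check the Prism REST API Explorer on your AOS version for the exact schema, and note you may need to trust the self-signed Prism certificate first:

    # Create a protection domain and protect a VM via the Prism v2 API (names are examples)
    $cred = Get-Credential
    $base = "https://ntx-cluster-01:9440/PrismGateway/services/rest/v2.0"
    Invoke-RestMethod -Method Post -Uri "$base/protection_domains" -Credential $cred `
        -ContentType "application/json" -Body (@{ value = "PD-Lab" } | ConvertTo-Json)
    Invoke-RestMethod -Method Post -Uri "$base/protection_domains/PD-Lab/protect_vms" -Credential $cred `
        -ContentType "application/json" -Body (@{ names = @("MyTestVM") } | ConvertTo-Json)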

    Now you need to add the remote site (NTX-Cluster-03) that you want to send to. You do this on the source cluster by adding the remote location there.

    NTX-Cluster-01 >>>>>>> NTX-Cluster-03

    Once both sites are set up to talk to each other, we will need to come back here.

    Go to the remote site cluster and log in.

    Go to the Data Protection section.

    Then create the new remote site connection.

    NTX-Cluster-03 >>>>>>> NTX-Cluster-01 (this is for reverse sync), basically to replicate it back if needed. In my case, it’s not needed, but in a prod setup, you would want to send it back once your DR failover activities are completed.

    Again, once we get the sites talking, I will come back here to update the mappings.

    Now to check the connections from NTX-Cluster-01 >>>>>>> NTX-Cluster-03

    Now, as you can see, 03 is talking to 01

    Then you can see that 01 is talking to 03

    NTX-Cluster-03 >>>>>>> NTX-Cluster-01

    Now let’s set the mappings. We can start on NTX-Cluster-01.

    Go back to the Remote Site and edit (“Update”) the remote site settings.

    Click on Settings, and scroll down.

    Add the Network Mappings and vStore Name mappings. This just sets up the source and destination network and storage.

    On Cluster-01

    So AHV: Data-Cluster01 will send to AHV: Data-Cluster-03

    My network name is the same on both test sites. This would be different in your environment, based on the network name you set when creating a base VLAN. I used the default storage location on both clusters as well.

    NTX-Cluster-01

    Go back to the Remote Site (NTX-Cluster-03)  and do the same but in reverse.

    On NTX-Cluster-03

    So AHV: Data-Cluster03 will send to AHV: Data-Cluster-01

    My network name is the same as the test sites.

    Save settings

    NTX-Cluster-03 updated with the remote site info.

    Replication has started

    On the remote tab, you can see the data completed along with start times, and then it shows outgoing.

    Now, if I log into my “remote site” NTX-Cluster-03

    I should see incoming and some stats. You can see it listing NTX-Cluster-01 as the remote site. This is because we are logged into NTX-Cluster-03.

    Now it’s done:

    I would like to migrate the server from NTX-Cluster-01 to NTX-Cluster-03.

    Go to the source Prism Element, click the Async DR tab, and then click the Migrate option.

    Now click on Migrate.

    As you can see, it’s gone from the Prod location

    Check the Remote location or DR NTX-Cluster-03, and you will see it now.

    Now let’s power it on.

    Up and online

    Now, in this case,  I am going to migrate it back.

    NTX-Cluster-03>>>>>>>NTX-Cluster-01

    Now it will snapshot the VM and send it back to NTX-Cluster-01.

    Now let’s check NTX-Cluster-01 and it’s back.

    In the next example, I need to move most of my lab VMs from NTX-Cluster-01 to NTX-Cluster-03 to free up resources on a host.

    I selected more this time.

    This time around, I will let the schedule do what it needs to do. This will give you an idea of how it works.

    The schedule starts at 5 pm, and it’s 4:53 pm now.

    At 5 pm we will see a local snapshot start, then it will start replicating to NTX-Cluster-03.

    Snapshot Started

    Replication started.

    Finished

    Now to fail them over.

    Click Async DR, then the protection domain name, then Entities.

    Click Migrate

    They’re gone and now on NTX-Cluster-03.

    I am powering them up now.

    They are all online.

    Working

    I hope you find this helpful if you want to play around with protection domains. As I stated above, this is in my lab, but I have used it in a production environment to move workloads around – mainly to move resources from one cluster to another in preparation for a data center migration involving a CVAD setup. It worked like a charm and saved me big time where it was needed.

    See more posts from Ray Davis.

  • How to Add a Domain-Based SSL Cert to Nutanix PE

    by Ray Davis, CTA

    One of the things I’ve needed to do was replace the *.Nutanix.local self-signed SSL cert on Prism Element. I used many Nutanix articles to do this in the beginning; however, after testing in Chrome and Edge, those browsers didn’t like the SSL cert. I would go through one article at a time, getting the certs updated, and the Chromium-based browsers still didn’t like it. I spent a good number of hours figuring out what I did incorrectly.

    Security is one of the many things in our line of work, and a self-signed cert may work in some places, but it’s best to replace them with certs from your internal CA. In my experience, Citrix doesn’t really need the CA cert to make the hypervisor connection; because of the Nutanix plugins, things work well. However, why not just do it anyway to avoid having to do it later down the road? I do it to make sure things are 100%.

    I went to another article and got the same results. I ended up contacting support, and they explained that the cert needed a SAN. As it turns out, I needed to add the names in my SAN file and not rely on the common name the way I was using it. 99.99% of the time, Nutanix articles are spot on, but in my opinion, they struggled a bit around this topic. As I worked with the support engineer on the phone, we built a document explaining all this and outlining the steps below; the gentleman asked to use it to make a new KB in the portal, which I don’t mind – after all, sharing is caring. I had used a wildcard in the common name when generating the key file (server.key) with the CSR file. I am not sure why that would not work, but the SAN route honored it, and I found other articles noting that Chromium-based browsers need the names in the SAN section of the certificate.

    Most of you understand what a SAN is in the cert realm: it’s just a list of DNS names in the certificate, so when the server is reached by one of those names, the browser sees that the name is good and you won’t get an SSL-not-trusted error. Below is a breakdown of how I replaced my self-signed SSL cert with an MS CA cert with a wildcard in the SAN. I have been using Nutanix AHV/Prism/Files for over a year at this point in my career, and I have learned a lot about this hypervisor and how things revolve around Nutanix AHV and the whole product line. Things are smooth in my experience, but I still have a lot to learn compared to my 12+ years with VMware. Let’s get started below!

    1. You will need to create a san.conf file:

    1. Use vi to create the file:
    2. Copy the text for the san.conf
    3. vi san.conf
    4. type “i” to insert
    5. paste the text below by clicking the right mouse button
    6. press “esc”
    7. :wq!

    2. Check the file with:

    1. cat san.conf

    3. SAN File Output below:

    [ req ]
    default_bits       = 2048
    default_keyfile    = server.key
    distinguished_name = req_distinguished_name
    req_extensions     = req_ext

    [ req_distinguished_name ]
    countryName                 = US
    countryName_default         = US
    stateOrProvinceName         = FL
    stateOrProvinceName_default = FL
    localityName                = Jax
    localityName_default        = Jax
    organizationName            = RaysLab
    organizationName_default    = VyStar
    commonName                  = ntxcls.lab.local
    commonName_default          = ntxcls.lab.local
    commonName_max              = 64

    [ req_ext ]
    subjectAltName = @alt_names

    [alt_names]
    DNS.0   = pe01.lab.local
    DNS.1   = *.lab.local

    4. Run the following commands:

    1. openssl genrsa -out server.key 2048
    2. openssl req -new -nodes -sha256 -config san.conf -out server.csr
    3. openssl req -in server.csr -noout -text  (You don’t need this command; it’s just a way to open the CSR file so you can copy its contents.)

    5. Winscp the CSR file to the laptop to a folder location.
    6. Open your MS CA location
     a. https://myca01.lab.local/certsrv


    b. Paste the CSR file that you generated in step 4.
    c. The certificate template needs to be “Web Server.”

    d. You have to select “Base 64” always
    e. Download the certificate, and save the file as prism.pem
    f. Download the Certificate Chain

    g. Example:

    7. For the certnew.p7b, which is the chain, you will need to open it.

    a. It will open up the cert in MMC

    b. Right-click and export it.

    c. Select “Base-64 encoded X.509” option.

    d. Save the file as ca.pem.

    e. Save.

    f. Next.

    g. Finish.

    8. Go to Prism Element and click the gear in the top right.
    a. Select SSL certificate.

    b. Select import Key and Certificate.

    c. Private Key = server.key (the key you generated in step 4 above)
    d. Public Certificate = prism.pem (the cert you downloaded from the MS certificate authority and saved as prism.pem in step 6e)
    e. CA Certificate/Chain = ca.pem (the cert you exported from the p7b and saved as ca.pem in step 7)

    f. Import files.

    g. Prism Element is good with Chromium-based browsers now.
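    h. To double-check that Prism is serving the new cert with the expected SANs, you can pull it straight off the Prism port (9440) from the CVM or any machine with openssl – a quick sanity check using the lab hostname from the san.conf above:

    openssl s_client -connect pe01.lab.local:9440 -servername pe01.lab.local </dev/null 2>/dev/null | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"

    You should see both pe01.lab.local and *.lab.local listed.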

    That concludes this article. Thank you.

  • FSLogix and the ConcurrentUserSessions Controversy

    Or why Microsoft killed the ConcurrentUserSessions setting in the new FSLogix versions and why you (probably) never needed it anyway…

    By Mike Streetz, CTP

    Often I would see this set as part of an FSLogix deployment, under the misapprehension that it is required on a Server OS to make multiple sessions work. It’s not required. It never was.

    This setting doesn’t exist anymore. Let’s talk about what it said originally and why Microsoft changed it.

    Originally it said:

    Enable FSLogix to handle concurrent user sessions on the same machine.

    Note: If you are using the Windows server feature to allow concurrent logins for the same Windows account on the same server (seen most often with Citrix XenApp), you must enable this policy.

    Most people seem to have glossed over the part where it says “for the same Windows account.”

    The ONLY time you want to use this is if you’re sharing a generic account and everyone needs to log in to the same machine simultaneously with an individual desktop session.

    This is a bit of an edge case but most commonly seen in manufacturing and sometimes healthcare. It’s also sometimes in play with public kiosks.

    You can’t even do this by default in Citrix! You can’t have multiple desktop sessions on the same machine with the same user, because by default you’ll get sent to your already-established session on that machine, so you need to override several settings to even make this work.

    fSingleSessionPerUser set to 0 is needed on the RDS side of things. This allows multiple sessions from the same user on the same machine; the default is that every user is limited to a single session.
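    For reference, here is what that RDS-side change looks like as a command; the value lives under the Terminal Server key, and setting it back to 1 restores the default single-session behavior:

    reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fSingleSessionPerUser /t REG_DWORD /d 0 /f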

    On the Citrix side, you need to set session reconnection to disconnected sessions only and same endpoint only, otherwise it’ll assume you’re roaming the session and all new connections will go to the existing open session.

    The wording around ConcurrentUserSessions has since been removed, and so has the setting.

    Hopefully this will stop people from trying to configure this setting, but because the screenshots for the ADMX templates in the documentation still show it, people still ask about it.

    https://docs.microsoft.com/en-us/fslogix/use-group-policy-templates-ht

    It also gets mentioned indirectly in the What’s New notes for the latest version:

    https://docs.microsoft.com/en-us/fslogix/whats-new



    99.9% of people never needed this setting, so Microsoft removed it from FSLogix; it now just checks whether you’ve got fSingleSessionPerUser set to 0.

    If you’re one of the 0.1% that need this, please let us know why! 



    Mike Streetz, CTP, Los Angeles CUGC Leader

  • Replicating FSLogix VHDX with Bvckup2

    by Ray Davis, CTA

    One of the things I needed was a fast, quick, down-and-dirty secondary FSLogix profile server. At the time, the business required me to deploy FSLogix ASAP, so I only built one server to satisfy what the company wanted. Ideally, you would build HA into your profile solution, architect it out, and put a lot more time into the design, but that wasn’t an option for me at the time.

    Please note that my statement “One of the things I needed was a fast, quick, down-and-dirty secondary FSLogix profile server” isn’t saying Bvckup2 is a cheap solution. It’s the best solution, in my humble opinion. I discussed this with James Kindon many moons ago, when he went into the HA options in an older blog of his. I am not going to repeat what he said; the URLs below show some of the differences and options. I have been writing this blog up for a while now, and I’ll go into how I did it. You can read all the details about the settings and what the options do in another blog James Kindon released.

    I am referencing this link for a more in-depth configuration and explanation of the history and back story: “High Level for HA configuration types”

    https://jkindon.com/2019/08/26/architecting-for-fslogix-containers-high-availability/


    Bvckup2 settings (great breakdown):

    FSLogix Container Replication with Bvckup2 (jkindon.com)

    Bvckup2 could achieve this for me, and it is something I highly recommend for FSLogix HA solutions. It handles block-level delta copying and even open files, which is nice to have when the VHDX is in RW mode with FSLogix.

    As you can see here, there are a couple of blogs on FSLogix HA, spreading users around for HA, or even putting users on specific profile servers:

    QuickPost – using FSLogix object-specific settings – JAMES-RANKIN.COM

    Designing Profile Management with Active-Active Resource Locations – James Kindon (jkindon.com)

    I borrowed this screenshot from James Kindon’s article. (Thanks, James)

    “Multiple SMB Locations with Multiple VHD Paths – Choosing to use VHDLocations rather than Cloud Cache does not mean that the ability to define multiple locations is lost. FSLogix allows for multiple paths to be defined to allow for Failover should one location be unavailable. The priority for which location will be used first is defined by the order that the paths are specified in the VHDLocations path.”
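    As a concrete illustration of that quote, the multiple locations are just ordered entries in the VHDLocations multi-string value. Here is a minimal sketch using this post’s FS01/FS02 servers and a hypothetical share name of Profiles (the \0 is reg.exe’s multi-string separator):

    reg add "HKLM\SOFTWARE\FSLogix\Profiles" /v VHDLocations /t REG_MULTI_SZ /d "\\FS01\Profiles\0\\FS02\Profiles" /f

    With this ordering, FS01 is always tried first, and FS02 only comes into play if FS01 is unreachable – the failover behavior described in the quote above.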


    For this use case, it’s a straightforward setup:

    1. I created identical shares on each server.

    2. Server1 (FS01) already had my data on it and was already in production, so I needed the data replicated to Server2 (FS02) in my remote location.

    3. I installed the software on both servers. The source is FS01, and the destination is FS02. They are in different regions as well (different states).

    4. To make it run continuously, you have to run it as a service, and you will need to grant permissions. I created a domain account and gave it permission in the software to control the service.


    5. Permission is granted on the folder location.

    6. Here are some base settings I set. The source is FS01, and it’s replicating the backup to the FS02 server.

    7. Let’s start the job and watch the data pour in on the FS02 server.

    8. See the data coming in and matching.

                       FS02                                                                                        FS01

    9. As you can see, the data is the same.

    10. In this example, I am using a user profile that is in use, simulating a real production use case.

    11. Before on FS01

    12. FS02 now matches what FS01 has.

    13. Now, what if the data changes on FS01, or the RW disk is gone from FS01 after the user logs off? The detect-changes feature cleans this up and makes the profiles match between the source and destination locations.

    14. Once I logged off from my main profile on FS01, Bvckup2 detected a data change from the source FS01 and compared the replicated VHDX only. Then, on FS02, it took the RW disk, created a folder under my main profile folder called $Archive (Bvckup 2), and dumped the needed data there with a recent replication. I used the Archive feature just in case something got screwed up, so I could go back to a point in time for the user’s container.

    15. What if an outage occurs on FS01, FS02 gets turned into the primary, and detect-changes can’t happen because there is nothing to detect from the source? I am not 100% sure on this yet; I can only assume that FSLogix will do what it does with any RW disk and create an RO option to get the profile going. (I need to test this by creating an outage and comparing results.)
     

    These options are what achieved that for me:

    1. Re-Scan destination, archive backup copies of deleted items, and use delta copying.

    Here are the high-level steps it does:

    1. It will detect when a user logs off.
    2. FSLogix natively takes the RW file and merges it back into the FS01 base VHDX.
    3. Bvckup2 sees this change and adjusts the destination to match, keeping an extra backup of the old file.

    Let’s test this. As you can see below, there are two RW files in the FS02 location. The reason is that I have an ongoing session in Site 1, and it copied the file as it should.

    At this time, I logged off Site 1. You will see only test.VHDX here on FS01, because of the VHDX merge. (I am using profile type 3.)

    Let’s run the replication job now, since the RW.VHDX is gone from the source FS01 server.

           a. Me replicating it


    This is a quick blog showing how it works; Bvckup2 is excellent software for replicating containers. There are many ways to set this up and achieve the desired replication in your environment. Feel free to ask me any questions. I may not be able to answer them all, but I can test the use case in my lab and try to help.

  • A Warm CUGC Welcome to the CTP and CTA Classes of 2022!

    by Kimberly Ruggero, Citrix

    Please extend a warm CUGC welcome to the newest additions to the CTP and CTA Classes of 2022! 


    👏 Congratulations and welcome to the newest members of the Citrix Technology Professional (CTP) and Citrix Technology Advocate (CTA) classes of 2022! And, of course, we are so happy to see all of the returning members in each program!

    Each year, Citrix recognizes technologists hailing from all over the globe through these two distinct programs. These seasoned techies bring a wealth of knowledge of Citrix solutions, as well as active engagement in their local Citrix communities, continually sharing knowledge and advocating on behalf of Citrix.

    This year, we’re excited to see many CUGC community members make the lists:

    CTP Adam Clark

    CTP Mike Streetz (LA CUGC leader)

    CTP Rory Monaghan

    CTA Aavisek Choudhury

    CTA Andrew Taylor (Vancouver CUGC leader)

    CTA Bjoern Mueller

    CTA David Gautney

    CTA David Salvatore (Swiss CUGC leader)

    CTA Henry Heres

    CTA Javier Lopez Santacruz

    CTA Jeremy Saunders

    CTA Julian Jakob

    CTA Kris Davis

    CTA Mani Kumar (Bay Area CUGC leader)

    CTA Micheál O’Dea

    CTA Mick Hilhorst

    CTA Steven Lemonier (Swiss CUGC leader)

    CTA Tiffanny Renrick (Columbia CUGC leader, CUGC Women In Tech Mentor)

    CTA Tom de Jong

    We’re so excited to see our CUGC community members shine! To learn more about the new inductees into the CTP and CTA classes, please see the following Citrix Blogs:


    Welcome to the 2022 Class of CTPs!

    Announcing the newest CTAs!


    Here’s to another great year of contributions to the CUGC community from our CTPs and CTAs! 🏆

  • How to Prune an FSLogix Profile

    by Ray Davis, CTA

    Last year, I put out a quick blog explaining FSLogix’s default exclusions (see FSLogix Default Exclusions Explanation and Quick Fix (mycugc.org)).

    That solution is working very well, but anything that was already in the FSLogix profile before I reversed the GPO to purge %Temp% and INetCache doesn’t get purged from the user’s profile. I needed a quick solution for this, and I recalled reading about one somewhere, but I couldn’t remember where. Later that night, I went to bed with it heavy on my mind, and around 3 am I rose out of bed remembering Aaron Parker’s FSLogix blogs from a couple of years ago. I hopped on the computer and did a quick Google search for “how to purge FSLogix profiles,” and sure enough, the name “stealthpuppy” stuck out. That’s the usual way of life for me: I am always thinking about what I don’t know, how I can handle whatever I am dealing with, and how to get to the next level. Sadly, it consumes me mentally, yet I always find a solution from other talented people in the industry. Ok, let’s get back on track. Here is the article I was referencing:

    A Practical Guide to FSLogix Containers Capacity Planning and Maintenance (stealthpuppy.com)

    Please understand that I copied the pruning material below from that article to help you understand what I am referencing.

    Prune the Profile
    “Not all profile locations are candidates for adding to the redirections.xml and excluding from the Profile Container. Consider history and cookie folders that would negatively impact user experience if they were not maintained across sessions. In this case, we can run regular maintenance on additional folder locations inside the profile to keep the size in check. This approach won’t directly reduce the size of the profile per se, but will assist in containing growth.

    I’ve written a PowerShell script – Remove-ProfileData.ps1, that can prune or delete a set of target files and folders. The script reads an XML file that defines a list of files and folders to remove from the profile.”

    “Actions on a target path can be:

    • “Prune – the XML can include a number that defines the age in days for last write that the file must be older than to be deleted”
    • “Delete – the target directory will be deleted. The Delete action will delete the entire folder and sub-folders”
    • “Trim – where the target path contains sub-folders, this action will remove all sub-folders except for the most recent. This approach is implemented to clean up applications such as GoToMeeting that can store multiple versions in the profile”

    The script supports -WhatIf and -Verbose output and returns a list of files removed from the profile. Adding -Verbose will output the total size of files removed from the user profile and the processing time at the end of the script. All targets (files / folders) that are deleted will be logged to a file. Deleting files from the profile can result in data loss, so testing is advised, and the use of -Confirm:$false is required for the script to perform a delete. To prune the profile, run the script as a logoff action.

    A word of caution – this script is unsupported. If you would like to help improve the script, pull requests are welcome.

    In my testing, I needed to prune a section of the profile. I have an application that uses IE (soon Chromium-based Edge), but I need to get all this trash out for now. It seems IE is cache-heavy for the web-based application, but the IE cache isn’t just for IE: other trash goes in here too, and if I prune it, the area is empty.

    As you can see below, the INetCache folder is dumping to the local_username location, but it’s not getting the old stuff, because the GPO was updated and put in after I realized it wasn’t working in my previous blog. Anything from before the GPO update does not get purged.

    As you can see here in my example, the username_local redirection is moving data to the scratch area, but notice the dates on the folders: it’s moving new stuff only!

    \\VDI\c$\Users\local_Test01\INetCache

    Perhaps this may still be a factor for the IE app, because it shares INetCache. But look at the actual location of what isn’t redirected: you can see that the old stuff is still lingering around. This means that IE is looking at INetCache for the overall “cache storage.” With some web applications, like the one I have in this environment, once the overall cache storage location is used up, the application becomes extremely slow.

    \\VDI\c$\Users\Test01\AppData\Local\Microsoft\Windows\INetCache

    To ensure this location is purged, we can schedule the Remove-ProfileData.ps1 script to run and take care of it. The script needs a targets file, which is passed as a parameter so the script knows what needs to be removed from the FSLogix profile.
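    For illustration, here is a hypothetical sketch of what a target definition for INetCache could look like; the element names here are made up, so check the targets.xml schema that ships with Remove-ProfileData.ps1 in Aaron Parker’s repository before using it:

    <!-- Hypothetical example only; see the Remove-ProfileData.ps1 repository for the real schema -->
    <Targets>
      <Target Name="INetCache">
        <Path Action="Prune" Age="7">AppData\Local\Microsoft\Windows\INetCache</Path>
      </Target>
    </Targets>

    Whatever the schema, do a dry run first; the script supports -WhatIf, and the parameters below match the invocation used later in this post:

    .\Remove-ProfileData2.ps1 -Targets .\targets.xml -LogPath .\logs -WhatIf -Verbose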

    A reminder of how it works

    A PowerShell script – Remove-ProfileData.ps1 – can prune or delete a set of target files and folders. The script reads an XML file that defines a list of files and folders to remove from the profile.

    Before

    The screenshot shows the old data.

    After

    I remoted to a VDI machine, and then I executed the script.

    C:\windows\system32\WindowsPowerShell\v1.0\powershell.exe -File “\\Domain\NETLOGON\CitrixScripts\WemExternalTask\ProfilePrune\Remove-ProfileData2.ps1” -Targets “\\Domain\NETLOGON\CitrixScripts\WemExternalTask\ProfilePrune\targets.xml” -LogPath \\Domain\NETLOGON\CitrixScripts\WemExternalTask\ProfilePrune\logs -verbose


    Location of INetCache through UNC:

    \\VDI69\c$\Users\Test01\AppData\Local\Microsoft\Windows\INetCache

    It looks like this has done the job for me, and there are many ways to implement it. I don’t advise running it at logon, though, as it would impact logon performance. You can use a logoff script, a scheduled task, or a UEM tool.

    I am a big WEM fan myself. I don’t use anything else at the moment. That alone is a whole other topic 😊

    Open the WEM Admin Console > External Tasks.

    Click Add at the bottom, and fill in the path to the script location.

    It calls PowerShell, and the argument points to the location where the meat and potatoes reside.

    Argument

    -Executionpolicy bypass -File “\\sillyrabbit.org\NETLOGON\CitrixScripts\WemExternalTask\ProfilePrune\Remove-ProfileData2.ps1” -Targets “\\sillyrabbit.org\NETLOGON\CitrixScripts\WemExternalTask\ProfilePrune\targets.xml” -LogPath \\sillyrabbit.org\NETLOGON\CitrixScripts\WemExternalTask\ProfilePrune\logs -verbose

    Now, click Assignments > Add to the desired assignment.

    Select the filter you want. I keep it elementary most times and use “Always True.”

    Now it’s ready to go. Of course, you can refresh the WEM agent and force it.

  • Windows Build Automation with Packer, PowerShell 2022 Redux

    by Owen Reynolds, CTA

    The related presentation recording is here: User Share: Windows Build Automation w/Packer, PowerShell 2022 Redux – CUGC (mycugc.org)

    Last year, I wrote a long post about using Packer.io to automate basic VMware shell creation and Win 10 / Win 2019 installation. At that time, I only ended up using the solution for re-builds in my own home lab. This year, I’ve had the need to build golden images for multiple clients, and each time the process was manual and error-prone, as no automation was used.

    In the last week of November 2021, I decided to sit down and revisit my Packer / PowerShell Windows build templates.

    I’m very happy to share that I’ve got automation in place to deliver a fully built / base-optimized / bilingual (En/Fr) / patched Windows EFI image in approximately 25 minutes. Last year, the builds were about 10 mins, but didn’t do HALF of what I have now. Let’s get into it!

    For reference, here is the blog post from last year on Packer / Windows build automation for VMware environments.

    If you’ve not read it, give it a read, as I won’t be re-reviewing most of the stuff I covered in that (LOOOONG) post. This new blog post is about the new PowerShell code I wrote to achieve a better level of automation.

    The Goal

    Building golden images for Windows is a bit of a mug’s game. Like anything in the mostly unregulated world of IT, there’s not really an agreed-upon standard.

    The scripts / config files are on my GitHub here

    To start, you will deploy Windows with an autounattend.xml. Autounattend.xml files have been around for a while, and you can use them with Packer, MDT, SCCM, or other tools. The idea is to deliver a Windows build with no prompts. The full structure of the autounattend.xml file is described in my blog post from last year.
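    To give a flavor of the file (the full version is on my GitHub), here is a heavily trimmed sketch of the kind of first-logon hook an autounattend.xml carries; the path and command here are illustrative, not the exact lines from my templates:

    <!-- Trimmed illustration of a first-logon command in autounattend.xml -->
    <unattend xmlns="urn:schemas-microsoft-com:unattend"
              xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State">
      <settings pass="oobeSystem">
        <component name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64"
                   publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
          <FirstLogonCommands>
            <SynchronousCommand wcm:action="add">
              <Order>1</Order>
              <CommandLine>powershell.exe -ExecutionPolicy Bypass -File "C:\Program Files\Packer\Scripts\Start-FirstSteps.ps1"</CommandLine>
            </SynchronousCommand>
          </FirstLogonCommands>
        </component>
      </settings>
    </unattend>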

    Setup

    On your Windows machine, create a directory structure as shown below, starting with c:\Program Files\Packer.

    Within the config sub-folder, create two folders: one called JSON, one called autounattend.

    Files will be downloaded to each in further steps.

    Download / extract Packer from packer.io to the c:\Program Files\Packer folder you created.

    Set a system environment variable to c:\Program Files\Packer


    The config files

    As above, you can review my blog post from last year if you want a full primer on Packer / JSON / autounattend.xml usage.

    For this blog post, download the required Win 2022 JSON / XML templates from my GitHub here:

    https://github.com/getvpro/Build-Packer/tree/master/Config/JSON

    https://github.com/getvpro/Build-Packer/tree/master/Config/Autounattend

    Start with the JSON file, open it using a text editor (like Notepad3).

    Edit line 30 to set the path to where you’ve got an up-to-date VMwareTools.iso.

    Edit line 50 to choose a unique local admin user that will be used for the build process. You can delete it when done, or rename it, but it needs to be the same as what is set in the autounattend.xml file you will be editing next.

    Edit lines 66-79 for your environment.

    Next, open the Autounattend.xml file.

    Press CTRL + H to do a search/replace for anything labelled “CHANGEME” and amend as required for your environment, ensuring you’ve set the username / password the same as in the JSON you just edited.

    Line 102 can be edited for your local time zone as well.

    Line 103 can be edited to amend your preferred computer name.

    The scripts

    https://github.com/getvpro/Build-Packer/tree/master/Scripts

    There are 4 scripts that are called as first logon activities from the autounattend.xml file.

    You will need to download them each to the c:\Program Files\Packer\Scripts folder created earlier.

    Lastly, download the Packer start-build script – essentially a wrapper that starts Packer, then uses PowerCLI to connect to vCenter and start the VM once Packer has done its initial provisioning.

    https://github.com/getvpro/Build-Packer/blob/master/Scripts/Start-PackerBuild-Win2022.ps1

    You will need to install VMware PowerCLI to use the script.

    Open PowerShell as an admin, and run

    Install-Module -Name VMware.PowerCLI -AllowClobber -force

    …if you get a “NuGet package manager can’t be found” BS error, run the following:

    [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12

    Then run the above again.

    With the files downloaded and the JSON / autounattend.xml edited for your environment, you’re ready to start the build.

    Build process

    Launch Start-PackerBuild-Win2022.ps1; it will ask for the VM name you set in the JSON file for your new Packer build VM, plus your vCenter name and credentials to connect to it. It records the relevant vCenter info to be used to power on the VM once Packer has done its first tasks.
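    Under the hood, the PowerCLI portion of the wrapper boils down to something like this minimal sketch (the server and VM names are examples, not the script’s actual variables):

    # Connect to vCenter and power on the freshly provisioned Packer VM
    Import-Module VMware.PowerCLI
    Connect-VIServer -Server vcenter.lab.local -Credential (Get-Credential)
    Start-VM -VM "PACKER-WIN2022"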

    The Start-FirstSteps.ps1 script contains the most important steps, as it downloads 2 more scripts / scheduled tasks from my GitHub.

    The actual order of execution of all these scripts is shown in the following screenshot:

    I live in Montreal, Quebec. A lot of businesses need to have both of Canada’s official languages installed on their systems to maintain compliance with these folks: https://www.oqlf.gouv.qc.ca/accueil.aspx. These lads can levy fines of up to $7,000 CAD for not having Fr-CA support on a computer system. This includes Fr-CA physical keyboards. With this build automation, my end of it is the Fr-CA language pack.

    My French speaking/listening skills aren’t great, but I can read it well enough, and can certainly navigate windows when it’s running in Fr-CA, enough to automate and configure!

    The Start-FirstSteps.ps1 script automates the entire process of downloading the language cabs for Fr-CA from my GitHub, installing them via Add-WindowsPackage, and – the last part – actually adding Fr-CA as a display language you can see in the notification area of the task bar. Of the new code I added to the Start-FirstSteps.ps1 script between last year and now, this part took quite a while. Language packs are per OS, so you need to download the correct one for your exact OS build: Win 10 1909 / 20H1 / 21H1 / Server 2016 / Server 2019, etc.

    As I’m using Server 2022, Microsoft doesn’t include the various language packs on the ISO; it’s a separate download that isn’t titled the way Microsoft says it should be. I got it via my.visualstudio.com.

    I’m probably breaking some kind of law by hosting the Fr-CA .cab on my GitHub, but I don’t care, come at me, MS bros 🙂

    The script downloads the multi-part zip and extracts it, but if you want it, the link to the Fr-CA cab is here.

    If you want to be MORE compliant and have access via your employer to my.visualstudio.com, you should download the entire pack and adjust the lines in the script that download the Fr-CA .cab.

    If you don’t need Fr-CA support in your image, open the related example autounattend.xml from my GitHub and search for the following:

    CMD /c reg.exe ADD “HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment” /v FrenchCaLangPack /t REG_SZ /d 1 /f

    Change the /d 1 to /d 0.

    Once the Fr-CA language pack is installed, the machine will reboot and start processing two custom scheduled tasks. The first scheduled task starts Windows Update, and the second monitors its progress. Of all the new code I’ve added to this Packer v2 project, this piece was the most challenging and frustrating. For about 3 years, I’ve been happily using the PowerShell module PSWindowsUpdate to automate the installation of Windows updates in my lab and for some clients. However, during regression testing of my Packer v2 build, I was NOT able to get PSWindowsUpdate to successfully apply Windows updates 100% of the time. It worked well, but it would get stuck during the download process about 50% of the time, which is not usable for an automated build. I tried all kinds of workarounds, but could never do better than 50%. As such, I decided to seek out an alternative method, and found one that’s actually built into Windows! It’s an exe I’d never heard of before called UsoClient.exe.

    The exact line I’m using is UsoClient.exe StartInteractiveScan

    This opens the familiar Windows Update UI we use to patch Windows interactively; however, with the StartInteractiveScan option, a scan is done and patches start applying right away.

    The imported scheduled tasks run in the SYSTEM context. The default behavior for running a PS script via a scheduled task in the SYSTEM context means the logged-in user won’t see the output of the script, or even the Windows Update UI, which isn’t ideal. So, to present this info to the logged-in user, my co-worker Jon Pitre recommended I have a look at an SCCM component called ServiceUI.exe. Launching PowerShell via this exe shows the output to the current user – neat! In this way, the various stages of the Packer build can be shown to the user. There are several auto-logons set for the build to cover the reboots after Windows updates have applied. Once the Monitor-WinUpdates.ps1 script determines there are no more updates to apply, the related scheduled tasks are disabled, and a final window is shown to the user indicating the total build time and that you’re ready to join the machine to your AD domain or run whatever post-base-build actions you want for your client environment.
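    If you are sketching this out yourself rather than importing my task XML from GitHub, registering a SYSTEM-context task that kicks off the update scan looks roughly like this (the task name is made up):

    # Rough sketch: a SYSTEM scheduled task that triggers the Windows Update interactive scan
    $action  = New-ScheduledTaskAction -Execute "UsoClient.exe" -Argument "StartInteractiveScan"
    $trigger = New-ScheduledTaskTrigger -AtStartup
    Register-ScheduledTask -TaskName "Packer-WinUpdateScan" -Action $action -Trigger $trigger -User "SYSTEM" -RunLevel Highest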

    With your Windows build done, it’s time to install some apps! For that, you’ll want to switch over to the GitHub of my good friend / fellow Canadian CTA / co-worker Jonathan Pitre, HERE.

    The Packer build installs the prerequisites for most of Jon’s app install scripts – NuGet, winget, the PowerShell App Deployment Toolkit, Evergreen, and more – so you would just need to choose the apps you need and let it rip!

    I will update this blog post once more when I’ve got my Packer JSON config files updated to HCL.

    Have a great day and happy automating 🙂

  • Cheers to 2022 and News from the Steering Committee

    by Steve Elgan, CTA & CUGC Steering Committee

    I hope all my fellow CUGC members are enjoying a healthy and happy start to 2022. The past couple of years may have hampered our ability to gather in person at local and XL events, but I’m proud to say that the CUGC community has held together just as strong as ever, and in some ways, even stronger.

    As we dive into this new year, I’m announcing some changes to CUGC’s Steering Committee. Firstly, I will be stepping down as president, though I will remain on the Steering Committee as a voting member. I am extremely pleased to announce that the committee has voted in Sarah Vogt as our new president, with Jon Bucud serving as vice president. Both Sarah and Jon have been active, engaged members of the Steering Committee and the greater community, and I look forward to the year ahead under their leadership.

    “Thank you, Steve. And thank you for serving us through probably the most interesting times we have faced to date. Your guidance and leadership during such uncertainty helped us redefine the community as a whole. As you stated, I believe we are stronger today for it.

    I am grateful for the time served with you, all I have learned from you, and believe Jon and I are ready to face whatever challenges 2022 throws our way. 

    To the members of CUGC, we are here for you so please do not hesitate to reach out to your leaders, Citrix representatives, or CUGC Steering Committee members. It is our mission to create pathways to education, networking, and knowledge all while creating strong bonds along the way. Cheers to 2022 CUGC!” –Sarah Vogt

    “As we look forward to what we envision to be another dynamic year in 2022, I owe the deepest congratulations to our new Steering Committee president, a fellow local leader and CUGC Women in Tech mentor, Sarah Vogt. Together, I believe we will continue to keep CUGC’s meteoric momentum going with the return of in-person events, blogs online, and of course, our jam-packed virtual events. On a personal level, as your new vice president, I hope I can do everything in my power to enable our leaders to succeed going into the new year and beyond.” –Jon Bucud

    Thomas Berger, Citrix
    I’d also like to welcome Citrix Director of Technical Marketing & Competitive Intelligence Thomas Berger to the committee. Thomas is an enthusiastic supporter of the CUGC community, and we are honored to have him join the team.

    I have enjoyed my time as president of the Steering Committee. I never would have expected to serve during a global pandemic. I’m proud of our CUGC community and how we played such an enormous role in keeping our world working remotely during COVID-19. I’m thankful this community exists and was a resource to help you and your organizations.

     🍻Here’s to another great year at CUGC! Stay tuned for more details on our upcoming XL events, 2022 t-shirt, CUGCY awards, and so much more!

    Sincerely,
    Steve Elgan

  • What is Citrix Secure Internet Access?

    by Mani Kumar, CTA & Bay Area CUGC Leader

    Citrix Secure Internet Access (CSIA) is a cloud-based solution that enables secure remote access to online and SaaS applications. It includes a secure web gateway, a cloud access security broker, malware protection with sandboxing, intrusion detection and prevention systems, and data loss prevention. Along with SDWAN and Secure Workspace Access, Citrix Secure Internet Access is a cornerstone of Citrix’s fully integrated Secure Access Service Edge (SASE) solution.

    Citrix Secure Internet Access allows safe access to online and SaaS applications both within and outside the Citrix Workspace, regardless of the user’s location. It adds an additional layer of protection to Citrix Workspace users and integrates with Citrix SDWAN to provide a fully integrated Citrix network and security solution.

    Features and benefits of Citrix Secure Internet Access:

    Citrix Secure Internet Access facilitates the centralized management of Citrix Cloud-based services. The capabilities and benefits of Citrix Secure Internet Access are summarized below:

    • 1. Unified management:
      • Comprehensive security capabilities with a holistic view and granular control. This is all available on a single platform, together with analytics for detecting security incidents, out of the ordinary behavior, reported risks, productivity loss, and policy breaches.
      • Users that have access to both SDWAN and Citrix Secure Internet Access can manage both services from the same interface. As a result, all traffic and users are safeguarded using a platform that combines network and security designs.
    • 2. Efficiency:
      • Citrix SDWAN and Citrix Secure Internet Access implementation is simple and quick, with automatic configuration.
      • High-performance design with cloud-like scalability.
      • For best speed, a single-pass architecture is used, in which traffic is decrypted once and all security measures are executed before it is re-encrypted.
      • SDWAN reduces latency by automatically selecting the closest Citrix Secure Internet Access gateway node.
    • 3. Reliable performance:
      • Updates are delivered automatically to ensure that you have the most up-to-date protection against security risks.
      • Backup connections for dual resiliency.
      • Because of the single, unified view, IT can troubleshoot issues more quickly.
    • 4. Privacy:
      • In the Citrix Secure Internet Access service, each customer’s data is processed through distinct gateways and separated by enterprise. Data is reviewed and logged locally to ensure GDPR compliance.
    • 5. Better remote working user experience:
      • Moving network security to the cloud, where the resources that users need are already available, brings security closer to the users. Citrix Secure Internet Access has over 100 points of presence (PoP) around the world.

    How Citrix Secure Internet Access works:
    Your users may access unapproved web and SaaS applications in one of the following ways:

    • Utilizing Citrix Workspace to create virtual desktops
    • From a branch or home office
    • Remotely from local host systems

    Regardless of the user’s mode of direct internet connection, traffic is diverted through Citrix Secure Internet Access.

    The three key use cases below describe how the process works.

    1. Citrix Virtual Apps and Desktops: Remote users may safely access unauthorized web and SaaS applications with Citrix Virtual Apps and Desktops. Install a CSIA Cloud Connector agent on the Virtual Delivery Agent (VDA) to reroute internet traffic.
    2. Native browsers on host systems: Remote users may safely access unapproved software on their local systems (managed or unmanaged laptops and phones). Install CSIA Cloud Connector agents on these devices to encrypt internet traffic. The Cloud Connector agent authenticates users and installs SSL certificates. The Cloud Connector has agents for iOS, macOS, Android, Windows, and Linux.
    3. Branch offices: On-premises users may securely access web and SaaS programs by routing traffic to Citrix Secure Internet Access. IPsec or GRE tunnels are used to do this without a Cloud Connector agent: a secure connection is established to the closest Citrix Secure Internet Access point of presence (PoP). Multiple connections to primary and secondary Citrix Secure Internet Access PoPs provide redundancy.


    Licensing:
    Citrix Secure Internet Access (CSIA) is available in three editions:

    Standard: A cloud-based security system with centralized management. CASB, SSL traffic management, and web content screening are the key security elements.

    Advanced: A complete security solution that includes malware detection, command-and-control callback detection, and incident response.

    Premium: This complete security solution adds superior sensitive content detection and analysis.

    I’ve set up a laboratory environment to demonstrate how to configure Citrix Secure Internet Access.

    Step 1: Log into Secure Internet Access
    Log into Citrix Cloud (https://citrix.cloud.com) and click on the CSIA Admin UX account within the customer list.

    • Ensure the OrgID in the top right matches the Network OrgID on the left side. If that is different, select Change Role.
    • Select the Configuration tab
    • Select the Open Citrix SIA Configuration button

    Step 2: Home -> Node Collection Management -> Node Groups

    • Make sure you have at least one Gateway Node Cluster
    • Document this hostname and IP Address for later validation as it should appear in your PAC file.

    Step 3: Configuring PAC Settings

    • Click on Edit Default Zone

    Step 4: Update Default Zone Dialog -> PAC Settings
    Use the “Add a Function” to add a “Domain and Sub-domain List” function containing Citrix Cloud domains:

    • com
    • com
    • net
    • net
    • net

     Any traffic destined for these URLs will not traverse the SIA service.

    Step 5: Web Security

    • This is the main place where we set actions on web categories.
    • Notice the Group Selector at the top, if you want to apply different settings to different groups.
    • (Optional) Turn on Enable ID Theft / IP Address URL Blocking. This blocks the use of raw IP addresses to access a website.
    • Turn on Enable HTTP Scanning on non-standard ports.

    Step 6: Creating an Allow List for a website:

    • We want to allow poker.com to be visited, but continue to disallow other gambling sites.
    • Scrape the poker.com site to determine what other sites need to be allowed for poker.com to function.
    • Click “Scrape”

    Step 7: Allow URL Dependencies:

    • For URL to Scrape, enter “poker.com”
    • Click “Scan”
    • Select all domains
    • Click “Add Selected to Allow List”

    Step 8: Enable blocking based on Keywords:

    • Enable the Adult and High Risk pre-defined keyword lists
    • Click “Save”
    • Add “poncho” to the keywords, selecting High Risk and Global prior to clicking “Add”

    Step 9: Configure Data Loss Prevention:

    • Enable “Content Analysis & Data Loss Prevention”
    • Enable all of the checkboxes (except “Block on Scan Error” and “PII”)
    • Click “Save”

    Step 10: DLP Search Patterns:

    • Click “Create Defaults”
    • Click “Create Default Search Patterns”

    Step 11: Adding DLP Rules:

    • Click “Add Rule”
    • Set Name to “OUT”
    • Set Direction to “Out”
    • Enable all rules
    • Click “Next”
    • Change Inclusion Policy to “Include All, Except Selected Items”
    • Click “Next” each time

    Note: Click Next until you reach step 6 of the wizard.

    Step 12: Configure Search Criteria:

    • Disable “PII”
    • Enable the following Regular Expressions:
      • Visa
      • Social Security Numbers
      • Passport Numbers
      • MasterCard
      • MasterCard1
    • Click “Next”

    Step 13: Rule Action Dialog Box:

    • Set Action to “Block”
    • Click “Save”