
Author Topic: HYPER-V VM isolation  (Read 6817 times)

MaksimB

  • Level 1 Member
  • Posts: 1
HYPER-V VM isolation
« on: May 04, 2022, 06:54:18 AM »

Good day, dear forum users!

I have a Hyper-V server with 4 virtual machines, each running a separate client. Previously, clients were physically isolated with a simple "separate cable to a separate network card" scheme, with a separate virtual router configured for each client. But the number of clients keeps growing.

I decided to try MAC-based isolation so that clients can't see each other on the same physical server. The problem is that a port-based VLAN does not help, since a single port (a single cable) carries both the host server and the 4 clients, five different MACs in total. Isolation at the level of Windows features does not help either: you can hide the other virtual machines, disable network browsing, and change the WORKGROUP, but you cannot prevent someone from going directly to an IP in 192.168.x.x. In other words, the potential for unauthorized access attempts via a known IP remains. The task is complicated by the fact that the network also contains a backup device that all virtual machines must be able to reach.

I am not a D-Link specialist, although I have some experience in setting up networks, so I am asking the gurus of this forum for help: what is the best way to implement this? I would really appreciate a more or less step-by-step recommendation for configuring the switch. General advice such as "bind the MACs of the virtual machines to separate VLANs", "configure the switching table", or "block forwarding between certain VM MACs" will not help; the whole question is exactly how to do this, and how to do it correctly.

I am sure the answer will be useful to many D-Link users, since the problem is a common one.

For my part, I promise a final report on implementing these recommendations, with screenshots of the D-Link web interface settings.

Thank you very much in advance; the problem is quite complex, and it definitely needs to be solved.

PacketTracer

  • Level 4 Member
  • Posts: 441
Re: HYPER-V VM isolation
« Reply #1 on: July 11, 2022, 02:00:56 PM »

Hi,

I'm not a D-Link specialist either, but I think I have a good knowledge of networking in general, so maybe I can suggest a basic idea for solving the problem.

What I have understood so far is:

  • You have a set of N VMs (N>=4) running on top of MS Hyper-V.
  • Those VMs and the Hyper-V host interface shall all live in the same IP network.
  • The VMs shall not be able to talk to each other.
  • The VMs shall be able to consume external resources within their IP network and beyond.
  • The solution should be able to scale, i.e. it should not rely on a separate network card per VM in the Hyper-V host.

So far, so correct?

If so, the requirements can possibly be fulfilled by using a (D-Link) switch that supports "asymmetric VLAN". This feature lets you define so-called "group ports" and "shared ports". In your case each VM would form a group of its own (and hence be isolated) and be connected to a corresponding "group" port on the switch, while any external device (your backup device, or a router connecting your VMs to the world beyond) would be connected to a "shared" port.

Alas, the problem with this idea is that, at first glance, it would require a separate network card (and VLAN) per VM, which contradicts your scaling requirement. To avoid this you could try the following:

  • Connect your Hyper-V host with a single network card (or perhaps use a NIC team with two cards) to the switch.
  • Configure the switch port (or ports, in the case of NIC teaming) as VLAN trunk(s).
  • Within Hyper-V connect the NIC or NIC team to a single vSwitch, and connect the VMs to that vSwitch using a separate VLAN per VM.
  • Move the Hyper-V host interface away from the physical interface(s) to a logical interface connected to the vSwitch, and assign it a VLAN of its own, as if it were just another VM (hence it is isolated from the VMs as well). A PowerShell sketch of this Hyper-V side follows below.
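
To make the Hyper-V side concrete, here is a minimal PowerShell sketch. The adapter name "Ethernet", the switch name "ClientSwitch", the VM names "VM1".."VM4" and the VLAN IDs are assumptions for illustration only; adjust them to your environment.

Code:

  # Create an external vSwitch bound to the physical NIC; keep a host vNIC on it
  New-VMSwitch -Name "ClientSwitch" -NetAdapterName "Ethernet" -AllowManagementOS $true

  # Put each VM into its own access VLAN (101-104)
  Set-VMNetworkAdapterVlan -VMName "VM1" -Access -VlanId 101
  Set-VMNetworkAdapterVlan -VMName "VM2" -Access -VlanId 102
  Set-VMNetworkAdapterVlan -VMName "VM3" -Access -VlanId 103
  Set-VMNetworkAdapterVlan -VMName "VM4" -Access -VlanId 104

  # Isolate the host's own (management) interface in VLAN 100, like a fifth VM
  Set-VMNetworkAdapterVlan -ManagementOS -Access -VlanId 100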

Now, for each VLAN (the VMs plus the Hyper-V host interface) you need a pair of physical switch ports that you short-circuit with a short Ethernet cable (this is the trick that saves you from using separate network cards in your Hyper-V host):

  • Switch on "asymmetric VLAN" support within your switch.
  • The first port of a short-circuit has to be configured as an access port for the VLAN under consideration (that is, it has to be an untagged member of that VLAN, and its PVID must be set to that VLAN ID as well).
  • The other port of the short-circuit must be configured as a group port for that VLAN (that is, its PVID must be set to the VLAN ID, and it must simultaneously be an untagged member of both that VLAN and one additional VLAN different from all VLANs used for the VMs and the Hyper-V host interface; that separate VLAN is the "shared" VLAN).

The result is that the (N+1) VLANs aggregated on the trunk port (which connects to the Hyper-V host) are spread out over (N+1) separate "group" ports via the short-circuits. This is equivalent to using (N+1) network cards (one per VM VLAN plus one for the Hyper-V host interface) connected directly to the corresponding (N+1) "group" ports.

Finally, you have to define "shared" ports, one per external device: set the PVID of a shared port to the shared VLAN, and make the port an untagged member of all VLANs (those used for the VMs and the Hyper-V host interface, plus the shared VLAN itself).

For N VMs, the Hyper-V host interface, M team NICs and K external (shared) devices you'll need 2(N+1)+M+K switch ports. For example, with N=4 VMs, M=1 NIC and K=2 external devices, that is 2*5+1+2 = 13 ports.

The price you pay for this solution is the consumption of physical switch ports.

EDIT

Thinking a bit more about what I wrote above, I conclude that the approach must be modified to prevent loops, which would result from a short-circuit sending the same VLAN from one port back into the other. This can be prevented by choosing a different ingress VLAN ID (the PVID setting) for the group port of a short-circuit than the VLAN ID of the access port at the other end.

For example:

Say you use VLANs 100 to 104: VLAN 100 for the logical interface of the Hyper-V host and VLANs 101-104 for the 4 VMs. The trunk port connecting the Hyper-V host would then carry VLANs 100-104.

For the shared VLAN let's use VLAN 300.

For the five group ports you could use VLANs 200-204: for each of the five short-circuits, configure a pairing that translates VLAN 10x into VLAN 20x, with VLAN 10x on the access port at one end and VLAN 20x on the group port at the other end of the corresponding short-circuit.

This would result in the following port configuration on the physical switch (with the "asymmetric VLAN" feature switched on); a CLI sketch follows the list:

  • Trunk-Port: Tagged member of VLANs 100,101,102,103,104. PVID: set it to a dummy VLAN not used, say VID 1 (default)
  • 1. short-circuit: Access port: Untagged member of VLAN 100, PVID=100 - Group port: Untagged member of VLANs 200,300, PVID=200.
  • 2. short-circuit: Access port: Untagged member of VLAN 101, PVID=101 - Group port: Untagged member of VLANs 201,300, PVID=201.
  • 3. short-circuit: Access port: Untagged member of VLAN 102, PVID=102 - Group port: Untagged member of VLANs 202,300, PVID=202.
  • 4. short-circuit: Access port: Untagged member of VLAN 103, PVID=103 - Group port: Untagged member of VLANs 203,300, PVID=203.
  • 5. short-circuit: Access port: Untagged member of VLAN 104, PVID=104 - Group port: Untagged member of VLANs 204,300, PVID=204.
  • Any shared port: Untagged member of VLANs 200,201,202,203,204,300, PVID=300.
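
For reference, here is what part of that port map could look like in the classic D-Link xStack-style CLI. This is only a sketch under assumptions: the port numbers (1 = trunk, 2/3 = first short-circuit, 12 = a shared port) are invented for illustration, only the first short-circuit is shown, and the exact commands (in particular setting the PVID via the gvrp command) vary between models and firmware revisions; on web-managed models you would do the same in the 802.1Q VLAN pages.

Code:

  # Assumed ports: 1 = trunk to Hyper-V host, 2+3 = 1st short-circuit, 12 = shared
  enable asymmetric_vlan
  create vlan v100 tag 100
  create vlan v200 tag 200
  create vlan v300 tag 300
  # Trunk port: tagged member of VLANs 100-104 (only VLAN 100 shown here)
  config vlan v100 add tagged 1
  # 1st short-circuit, access side: untagged in VLAN 100, PVID 100
  config vlan v100 add untagged 2
  config gvrp 2 pvid 100
  # 1st short-circuit, group side: untagged in VLANs 200 and 300, PVID 200
  config vlan v200 add untagged 3
  config vlan v300 add untagged 3
  config gvrp 3 pvid 200
  # Shared port: untagged in VLANs 200-204 and 300, PVID 300 (only 200/300 shown)
  config vlan v200 add untagged 12
  config vlan v300 add untagged 12
  config gvrp 12 pvid 300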

/EDIT

Another, better (perhaps even the best) solution would be to configure an "isolated private VLAN", or "isolated PVLAN" for short.

BUT:

This only works if both the physical switch in use and the Hyper-V vSwitch support it. As far as I know, Hyper-V does not expose this feature in the GUI used to configure a vSwitch; instead you have to use special PowerShell commands to do the job.

EDIT2
Found some hints here:
/EDIT2

To set up an isolated PVLAN you need exactly 2 VLANs: a so-called primary VLAN and a secondary or "isolated" VLAN. The relationship between these two cooperating VLANs must be configured both in the vSwitch and in the physical switch, which are connected (via the Hyper-V host NIC) by a VLAN trunk carrying these two VLANs.

On the vSwitch you would define so-called "isolated" ports, which connect the VMs and the (logical) Hyper-V host interface. On the physical switch you would define so-called "promiscuous" ports, which connect the shared devices (backup server, router, ...). A PowerShell sketch of the vSwitch side follows.
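A minimal sketch of the PowerShell side, assuming primary VLAN 100, isolated secondary VLAN 200, and the same illustrative VM names as above:

Code:

  # Put each VM port into isolated PVLAN mode (primary 100, secondary 200)
  Set-VMNetworkAdapterVlan -VMName "VM1" -Isolated -PrimaryVlanId 100 -SecondaryVlanId 200
  Set-VMNetworkAdapterVlan -VMName "VM2" -Isolated -PrimaryVlanId 100 -SecondaryVlanId 200
  # ... repeat for the remaining VMs ...

  # Isolate the host's logical interface the same way
  Set-VMNetworkAdapterVlan -ManagementOS -Isolated -PrimaryVlanId 100 -SecondaryVlanId 200

  # Verify the VLAN mode of all adapters
  Get-VMNetworkAdapterVlan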


« Last Edit: September 14, 2022, 12:05:27 PM by PacketTracer »