
Solving Memory Management Problems

Student’s Name

Institution

Date

SOLVING MEMORY MANAGEMENT PROBLEMS

Abstract

Memory management is the process of organizing computer memory and assigning portions of it, called blocks, to the various running processes so as to improve the system's overall performance. It is a crucial function of an operating system (OS): it moves processes between main memory and disk during execution (Mishra & Kulkarni, 2018) and keeps track of every memory location, whether that location is free or allocated to a process. Memory management techniques allocate space to different application routines and ensure that they do not interfere with one another. This assignment explores three such techniques: swapping, memory allocation, and paging.

Swapping

Swapping is a mechanism by which a process is temporarily moved from main memory to a backing store and later brought back for continued execution. The backing store is usually a secondary storage device, such as a hard disk, that is large enough to hold copies of all users' memory images and that provides direct access to those images (Marsico et al., 2017). Swapping affects performance (Marsico et al., 2017), but it allows many large processes to run concurrently. Because it reduces wasted CPU execution time, it can easily be combined with a priority-based scheduling technique to improve throughput.

Consider the situation in which main memory is full and several processes are waiting in secondary storage. Two questions arise: which process should be removed from main memory to create space, and which process should be brought in from secondary storage? For the first, processes that are blocked waiting on slow events are swapped out of main memory. For the second, the process that has been waiting the longest in secondary storage is brought in; a short sketch of this selection policy is given below.

Swapping makes sense when the CPU is slow relative to its backing store: the CPU issues a single transfer command, and the I/O system moves an entire process into or out of main memory. As CPUs become faster, paging makes more sense, because the CPU has time to decide which individual pages are not being used and to issue transfer requests for them. Paging generally requires "smarter" hardware, with access bits, or at least invalid-page bits, for each page of memory. Swapping wastes memory due to internal fragmentation. Even on paging systems, however, swapping is useful when thrashing occurs because too many active processes are touching too many pages.

Mobile operating systems do not support swapping. Devices that run them, such as phones and tablets, use flash memory of limited capacity, so there is not enough space for a swap area. Flash memory also supports only a limited number of writes, so repeatedly swapping processes between main memory and flash would make the device unreliable. In addition, the throughput between main memory and flash memory is poor.
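
The following sketch illustrates the eviction and admission policy described above. It is a minimal, hypothetical example: the process structure and its waiting_on_slow_event and wait_time fields are assumptions made for illustration and do not correspond to any particular operating system's data structures.

#include <stdio.h>
#include <stddef.h>

/* Hypothetical process descriptor used only for this sketch. */
struct process {
    int  id;
    int  waiting_on_slow_event; /* nonzero if blocked on a slow event */
    long wait_time;             /* time spent waiting in the backing store */
};

/* Pick a victim to swap out: a resident process blocked on a slow event. */
struct process *pick_swap_out(struct process *resident, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (resident[i].waiting_on_slow_event)
            return &resident[i];
    return NULL; /* nothing suitable to evict */
}

/* Pick a process to swap in: the one waiting longest in the backing store. */
struct process *pick_swap_in(struct process *swapped, size_t n)
{
    struct process *longest = NULL;
    for (size_t i = 0; i < n; i++)
        if (longest == NULL || swapped[i].wait_time > longest->wait_time)
            longest = &swapped[i];
    return longest;
}

int main(void)
{
    struct process resident[] = { {1, 0, 0}, {2, 1, 0}, {3, 0, 0} };
    struct process swapped[]  = { {4, 0, 40}, {5, 0, 75} };

    struct process *out = pick_swap_out(resident, 3);
    struct process *in  = pick_swap_in(swapped, 2);

    if (out != NULL && in != NULL)
        printf("swap out process %d, swap in process %d\n", out->id, in->id);
    return 0;
}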

Memory allocation

Memory allocation is the operation by which programs are given space in memory. Primary memory is split into two sections: low memory, which the OS occupies, and high memory, where user processes reside. Memory is further divided into distinct partitions, and each process is assigned one as required; partition allocation is a common way to reduce internal fragmentation (Marsico et al., 2017). There are several partition assignment schemes: first fit, best fit, worst fit, and next fit. First fit allocates the first sufficiently large block found when scanning from the start of primary memory. Best fit assigns the process the smallest available partition that is still large enough. Worst fit assigns the process the largest available partition in main memory. Next fit is almost identical to first fit, except that it searches for the first adequate partition starting from the point of the previous allocation rather than from the beginning of memory (Marsico et al., 2017); a short sketch contrasting first fit and best fit is given below. There are three main types of memory allocation: static memory allocation, automatic memory allocation, and dynamic memory allocation.

In the buddy system, memory is managed in blocks whose sizes are powers of two. A request is satisfied by repeatedly splitting a larger block into two equal halves, called buddies, until a block that best fits the request is produced. When blocks are freed, a block is merged back with its buddy only if the buddy is free and of the same size; this merging is called coalescing.

In slab allocation, the system initially marks each slab as empty. When a process requests a new kernel object, the system first tries to find a free slot on a partial slab for that kind of object. If no such slot exists, the system allocates a new slab from contiguous physical pages, assigns it to the appropriate cache, allocates the new object from that slab, and marks the slab as partial.

In the weighted buddy system, a large block is split into two unequal components rather than two equal halves, with sizes related to powers of two (on the order of 2^k and 2^(k+1)) as required. The weighting factor is chosen according to the number of fragments into which the memory block should be split.
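
The following sketch contrasts first fit and best fit over a fixed list of free partitions. It is a minimal illustration under the assumption that free partitions are tracked in a simple array of sizes; real allocators use linked lists of free blocks and additional bookkeeping.

#include <stdio.h>

#define NUM_PARTITIONS 5

/* Return the index of the first free partition large enough, or -1. */
int first_fit(const int sizes[], int n, int request)
{
    for (int i = 0; i < n; i++)
        if (sizes[i] >= request)
            return i;
    return -1;
}

/* Return the index of the smallest free partition that still fits, or -1. */
int best_fit(const int sizes[], int n, int request)
{
    int best = -1;
    for (int i = 0; i < n; i++)
        if (sizes[i] >= request && (best == -1 || sizes[i] < sizes[best]))
            best = i;
    return best;
}

int main(void)
{
    int partitions[NUM_PARTITIONS] = { 100, 500, 200, 300, 600 };
    int request = 212;

    printf("first fit -> partition %d\n", first_fit(partitions, NUM_PARTITIONS, request));
    printf("best fit  -> partition %d\n", best_fit(partitions, NUM_PARTITIONS, request));
    return 0;
}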

Static Memory Allocation: As the name suggests, static means unchanging. In computer programming, we declare variables to store data during the execution of a program. With static memory allocation, memory is reserved when the program starts: the program is given a fixed region for its data and cannot grow beyond it, so the memory is limited. The size of this memory cannot be modified, which means that if the program needs more memory during execution, that extra memory cannot be allocated.
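
A minimal example of static allocation follows: the array below is sized at compile time and cannot grow at run time. The array name and values are illustrative only.

#include <stdio.h>

/* Statically allocated: the size is fixed at compile time and the
   storage exists for the whole lifetime of the program. */
static int job_ids[4];

int main(void)
{
    for (int i = 0; i < 4; i++)
        job_ids[i] = 100 + i;

    for (int i = 0; i < 4; i++)
        printf("job_ids[%d] = %d\n", i, job_ids[i]);

    /* job_ids cannot be resized; storing a fifth element would require
       a different strategy, such as dynamic allocation. */
    return 0;
}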

Automatic memory allocation: Automatic variables created during the execution of a program are stored in memory that is set up when the enclosing function is called, typically on the stack. This is a temporary kind of storage that exists only while the function is executing. We do not need to reserve this memory explicitly, but we have limited control over its lifetime, since it disappears as soon as the function returns. As with static allocation, the size of an automatic allocation cannot be adjusted during execution, so if the program needs more memory, that extra memory cannot be obtained this way.
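
A minimal sketch of automatic allocation follows: the local array lives on the stack and ceases to exist when the function returns. The function and buffer names are illustrative only.

#include <stdio.h>

/* 'buffer' is an automatic variable: it is created on the stack when
   sum_first_n() is called and released when the function returns. */
int sum_first_n(int n)
{
    int buffer[8];           /* fixed size, decided at compile time */
    int total = 0;

    if (n > 8)
        n = 8;               /* the automatic array cannot grow */

    for (int i = 0; i < n; i++) {
        buffer[i] = i + 1;
        total += buffer[i];
    }
    return total;            /* buffer ceases to exist here */
}

int main(void)
{
    printf("sum of first 5 integers: %d\n", sum_first_n(5));
    return 0;
}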

Dynamic Memory Allocation: Here the word control comes into play: with dynamic memory allocation we can control the size of the memory. We can create memory for data at run time using allocation functions such as malloc(), calloc(), and realloc(). The memory size can be modified during the execution of the program, so we do not need to specify the amount of memory while writing it. Because the memory does not have to be specified in advance, this kind of allocation is very useful for real-time applications. The C programs presented later in this paper demonstrate dynamic allocation and reallocation with these functions.

Paging

Paging is a storage-management technique that allows the OS to retrieve processes from secondary storage into main memory in units called pages. In this technique, main memory is partitioned into small, fixed-size frames. The frame size should equal the page size to make maximal use of main memory and to avoid external fragmentation, although the OS may still experience internal fragmentation.

Page Map Table

Paging is a logical concept and facilitates faster data access. It is straightforward to implement and is regarded as an effective approach to memory management. Nonetheless, the page table requires additional memory of its own and may not be effective on a system with limited RAM.
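
The sketch below shows how a page map table can translate a virtual address into a physical one: the page number indexes the table, and the offset is carried over unchanged. The 4 KB page size and the tiny eight-entry table are assumptions chosen purely for illustration.

#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE   4096u     /* assumed 4 KB pages */
#define PAGE_SHIFT  12        /* log2(PAGE_SIZE)    */
#define NUM_PAGES   8         /* tiny address space */

/* Page map table: page_table[page number] = frame number. */
static const uint32_t page_table[NUM_PAGES] = { 5, 2, 7, 0, 3, 1, 6, 4 };

/* Translate a virtual address to a physical address. */
uint32_t translate(uint32_t vaddr)
{
    uint32_t page   = vaddr >> PAGE_SHIFT;       /* upper bits: page number */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);   /* lower bits: offset      */
    uint32_t frame  = page_table[page % NUM_PAGES]; /* modulo guards the tiny table */
    return (frame << PAGE_SHIFT) | offset;
}

int main(void)
{
    uint32_t vaddr = 0x2ABC;  /* page 2, offset 0xABC */
    printf("virtual 0x%X -> physical 0x%X\n", vaddr, translate(vaddr));
    return 0;
}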

Memory management functions handle the allocation and deallocation of dynamic memory. They form an abstraction layer above the conventional C management functions such as malloc, realloc, and free, and the user can replace them with custom code to implement a different memory management scheme (Gandhi et al., 2017). For instance, an embedded application may need to allocate from a static block. The main memory management functions that may be used include rtxMemAlloc, rtxMemFree, and rtxMemReset. These functions are used only in C code.

The rtxMemAlloc function allocates a block of memory much as malloc would, except that, from the user's perspective, a pointer to a context structure is required as an argument, and the allocated memory is then tracked within that context (Memory Management (MEM) Coding: Analysis & Example, 2021). The rtxMemFree function releases all memory held within a context. The rtxMemReset function resets all memory within a context; the difference from rtxMemFree is that it does not actually release the underlying blocks of memory (Gandhi et al., 2017). The rtxMemRealloc function reallocates an existing block of memory, similar to the C realloc function.
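
As a rough illustration of this context-tracking idea, the sketch below wraps the standard allocator so that every block is recorded in a context and can be released with a single call. The names MemCtxt, ctxtAlloc, and ctxtFreeAll are hypothetical and are not the rtxMem API itself; they only mimic its pattern. The standard-library programs that follow show the underlying malloc, calloc, and realloc calls that such wrappers build on.

#include <stdio.h>
#include <stdlib.h>

#define MAX_BLOCKS 32

/* Hypothetical context: records every block allocated through it. */
typedef struct {
    void  *blocks[MAX_BLOCKS];
    size_t count;
} MemCtxt;

/* Allocate a block and remember it in the context (cf. rtxMemAlloc). */
void *ctxtAlloc(MemCtxt *ctxt, size_t nbytes)
{
    if (ctxt->count >= MAX_BLOCKS)
        return NULL;                      /* context is full */
    void *p = malloc(nbytes);
    if (p != NULL)
        ctxt->blocks[ctxt->count++] = p;
    return p;
}

/* Free every block tracked by the context (cf. rtxMemFree). */
void ctxtFreeAll(MemCtxt *ctxt)
{
    for (size_t i = 0; i < ctxt->count; i++)
        free(ctxt->blocks[i]);
    ctxt->count = 0;
}

int main(void)
{
    MemCtxt ctxt = { {0}, 0 };
    int  *a = ctxtAlloc(&ctxt, 10 * sizeof(int));
    char *b = ctxtAlloc(&ctxt, 64);

    if (a != NULL && b != NULL)
        printf("two blocks allocated and tracked in the context\n");

    ctxtFreeAll(&ctxt);   /* one call releases everything */
    return 0;
}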

#include <stdio.h>
#include <stdlib.h>

int main()
{
    int *ptr, i, n1, n2;

    printf("Enter size: ");
    scanf("%d", &n1);

    ptr = (int*) malloc(n1 * sizeof(int));

    printf("Addresses of previously allocated memory:\n");
    for (i = 0; i < n1; ++i)
        printf("%p\n", (void*)(ptr + i));

    printf("\nEnter the new size: ");
    scanf("%d", &n2);

    // reallocating the memory
    ptr = realloc(ptr, n2 * sizeof(int));

    printf("Addresses of newly allocated memory:\n");
    for (i = 0; i < n2; ++i)
        printf("%p\n", (void*)(ptr + i));

    free(ptr);
    return 0;
}

#include <stdio.h>
#include <stdlib.h>

int main()
{
    int *p, i;

    printf("Initial size of the array is 4\n\n");

    p = (int*) calloc(4, sizeof(int));
    if (p == NULL)
    {
        printf("Memory allocation failed");
        exit(1); // exit the program
    }

    for (i = 0; i < 4; i++)
    {
        printf("Enter Job ID at index %d: ", i);
        scanf("%d", p + i);
    }

    printf("\nIncreasing the size of the array by 5 elements ...\n");

    p = (int*) realloc(p, 9 * sizeof(int));
    if (p == NULL)
    {
        printf("Memory allocation failed");
        exit(1); // exit the program
    }

    printf("\nEnter 5 more Job IDs\n\n");
    for (i = 4; i < 9; i++)
    {
        printf("Enter Job ID at index %d: ", i);
        scanf("%d", p + i);
    }

    printf("\nFinal array:\n\n");
    for (i = 0; i < 9; i++)
    {
        printf("%d ", *(p + i));
    }

    free(p); // release the dynamically allocated memory

    // signal to the operating system that the program ran fine
    return 0;
}

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main()
{
    char *str;

    /* Initial memory allocation */
    str = (char *) malloc(20);
    strcpy(str, "memory address");
    printf("String = %s, Address = %p\n", str, (void*)str);

    /* Reallocating memory */
    str = (char *) realloc(str, 50);
    strcat(str, ".com");
    printf("String = %s, Address = %p\n", str, (void*)str);

    /* Deallocate allocated memory */
    free(str);

    return 0;
}

The virtual memory subsystems in Windows and Linux share several attributes. For instance, both operating systems make extensive use of the paging structures of modern processors to give each process its own virtual address space (Yosifovich et al., 2017). However, there are also differences between the two. For example, in contrast to Linux's monolithic kernel, Windows follows a layered, more microkernel-like design in which memory management is implemented, and this architecture can increase the execution time of system calls (Yosifovich et al., 2017).

Conclusion

Windows uses working sets as its replacement policy, whereas Linux implements a global replacement policy. A working set is the set of pages a program needs in memory to operate efficiently. Yosifovich et al. therefore argue that it would be interesting to understand for which applications working sets are a drawback and in which scenarios they are an advantage. Moreover, in Windows, the most suitable way for several processes to share memory is via mapped files, so a physical file must be created for that purpose (Yosifovich et al., 2017). A Linux OS implements IPC (inter-process communication) with kernel support, so shared memory is an operation distinct from mapped files.

References

Gandhi, J., Hill, M. D., & Swift, M. M. (2017). U.S. Patent No. 9,619,401. Washington, DC: U.S. Patent and Trademark Office.

Marsico, A., Doriguzzi-Corin, R., & Siracusa, D. (2017, May). An effective swapping mechanism to overcome the memory limitation of SDN devices. In 2017 IFIP/IEEE Symposium on Integrated Network and Service Management (IM) (pp. 247-254). IEEE.

Mishra, D., & Kulkarni, P. (2018). A survey of memory management techniques in virtualized systems. Computer Science Review, 29, 56-73.

Yosifovich, P., Solomon, D. A., & Ionescu, A. (2017). Windows Internals, Part 1: System architecture, processes, threads, memory management, and more. Microsoft Press.

Yousafzai, A., Gani, A., Noor, R. M., Sookhak, M., Talebian, H., Shiraz, M., & Khan, M. K. (2017). Cloud resource allocation schemes: review, taxonomy, and opportunities. Knowledge and Information Systems, 50(2), 347-381.

Study.com. (2021). Memory management (MEM) coding: Analysis & example. Retrieved January 24, 2021, from https://study.com/academy/lesson/memory-management-mem-coding-analysis-example.html

Study.com. (2021). Memory allocation schemes: Definition & uses. Retrieved January 24, 2021, from https://study.com/academy/lesson/memory-allocation-schemes-definition-uses.html

