Jo Dec 19, 2024
Buildings consume a large amount of energy around the world, and therefore building energy saving is being studied extensively.
A mixing ventilation system (MVS) is a mechanical ventilation system that mixes the entire room air by means of the jet flow produced by the supply air. A displacement ventilation system (DVS) is a mechanical ventilation system that supplies air at a lower temperature than the room air into the lower part of a room; buoyancy then drives the room air upward so that it is exhausted at the ceiling.
The basic difference between DVS and MVS is that DVS relies mainly on buoyancy, while MVS relies on the mechanical mixing force of the supply jet. Because a DVS aims to condition only the occupied zone, thermal and concentration stratification forms between the lower and upper parts of the room. DVS is characterized by good air quality, efficient removal of polluted air and a significant energy saving effect. In the design of a DVS, the thermal stratification height has to be set above the height of the occupied zone, but not too high, because an excessive height wastes energy and increases the required supply airflow. The determination of the stratification height is therefore one of the key problems in the application of DVS. Moreover, when the cooling load is heavy, the airflow a DVS needs to keep the occupied-zone temperature below the required level becomes very large. For DVS, experimental, empirical and engineering design methods have already been presented and mathematical analysis methods have been studied. However, there is little data on DVS for high-ceiling rooms.
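The relation between cooling load and supply airflow noted above follows from a standard sensible heat balance, V = Q / (ρ · cp · ΔT). The Python sketch below simply evaluates this balance for a few supply-to-exhaust temperature differences; the cooling load and temperature values are illustrative assumptions, not figures from the study.

```python
# Minimal sketch: required supply airflow for displacement ventilation
# from a sensible heat balance, V = Q / (rho * cp * dT).
# All numbers are illustrative assumptions, not values from the study.

RHO_AIR = 1.2     # air density, kg/m^3
CP_AIR = 1005.0   # specific heat of air, J/(kg*K)

def supply_airflow(cooling_load_w: float, delta_t_k: float) -> float:
    """Volume flow rate (m^3/s) needed to remove a sensible cooling load
    with a given supply-to-exhaust temperature difference."""
    return cooling_load_w / (RHO_AIR * CP_AIR * delta_t_k)

if __name__ == "__main__":
    load = 8000.0                 # W, assumed cooling load
    for dt in (3.0, 5.0, 8.0):    # K, assumed temperature differences
        v = supply_airflow(load, dt)
        print(f"dT = {dt:.0f} K -> {v:.2f} m^3/s ({v * 3600:.0f} m^3/h)")
```

Because a DVS works with a small supply-to-room temperature difference, the required airflow grows quickly with the load, which is the drawback noted above.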
Paek Myong Chol, a section head at the Faculty of Heat Engineering, has used CFD to investigate the temperature change in the working zone when a heat source is present, for MVS and for HDVS with a vertical supply duct, and has presented its effectiveness. In addition, he has verified, from the viewpoint of energy saving, that there is an effective length of the vertical supply duct in HDVS with supply ducts.
He has found that, for the same supply air temperature and velocity, the air temperature in the working zone depends on the length of the vertical supply duct, and that it increases when the duct is longer than a limiting value.
...
Jo Dec 17, 2024
In general, the frequency dispersion of conductivity is neglected in electromagnetic surveying. In contrast to the conventional induced polarization method, here the current is supplied to the ground indirectly, so the secondary voltage produced by the ramp turn-off of the transmitter current actually reflects the frequency dispersion of conductivity. Hence the electromagnetic (EM) effect and the induced polarization (IP) effect arise at the same time, which distorts the EM response, that is, causes sign reversals. For this reason, many researchers have studied the mechanism of the sign reversal and forward modeling that takes this phenomenon into account.
Many researchers have simulated the distorted transient response using frequency-dependent conductivity. Most of these studies concluded that the Cole-Cole model is well suited to describing the IP effect observed in time-domain electromagnetic (TDEM) surveying.
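For reference, the Cole-Cole model expresses the complex, frequency-dependent resistivity as ρ(ω) = ρ0[1 − m(1 − 1/(1 + (iωτ)^c))], where m is the chargeability, τ the time constant and c the frequency exponent. The Python sketch below simply evaluates this expression over a frequency range; the parameter values are illustrative assumptions, not values from the work described here.

```python
# Minimal sketch: complex resistivity of the Cole-Cole model,
#   rho(w) = rho0 * (1 - m * (1 - 1 / (1 + (1j*w*tau)**c)))
# Parameter values below are illustrative assumptions only.
import numpy as np

def cole_cole(freq_hz, rho0, m, tau, c):
    """Complex resistivity (ohm*m) at the given frequencies (Hz)."""
    w = 2.0 * np.pi * np.asarray(freq_hz)
    return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + (1j * w * tau) ** c)))

if __name__ == "__main__":
    freqs = np.logspace(-2, 5, 8)    # 0.01 Hz .. 100 kHz
    rho = cole_cole(freqs, rho0=100.0, m=0.3, tau=0.01, c=0.5)
    for f, r in zip(freqs, rho):
        print(f"{f:10.2f} Hz  |rho| = {abs(r):7.2f} ohm*m  "
              f"phase = {np.angle(r, deg=True):6.2f} deg")
```

At low frequency the resistivity tends to ρ0 and at high frequency to ρ0(1 − m); this dispersion is what produces the IP contribution to the transient response.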
The transient response of a polarizable whole space certainly differs from both the response of a polarizable half space and that of a non-polarizable whole space.
Jon Kwang Bok, a researcher at the Faculty of Earth Science and Engineering, has approximated the transient electromagnetic field generated by a rectangular loop in a layered whole space containing polarizable layers, using the Cole-Cole model and the linear digital filtering method.
As a result, he has drawn two conclusions.
First, the measured induced voltage does not depend on whether the polarizable conductive layers lie above or below the measurement point. Second, polarization must be taken into account in whole-space inversion, as it is in the half-space case, because the response computed by the conventional whole-space modeling method differs considerably from that computed by the polarizable whole-space modeling method.
...
Jo Dec 16, 2024
The unmanned surface vessel (USV) is gaining popularity in various fields, including marine environmental monitoring and coastal water protection, and is attracting worldwide attention as marine equipment.
A USV can reduce direct human intervention and operate safely in hazardous environments, in tasks such as marine exploration, personnel rescue, and environmental monitoring and survey.
When a USV is sailing, it deviates from the given course because of disturbances from waves, currents, wind and other elements of the marine environment.
Although several technical solutions have been proposed, ship course control remains a very challenging problem.
Ship course is mostly controlled by the conventional PID control law. Conventional controllers are still regarded as a good choice for an auto-steering system when the plant is a typical SISO system, but they are not suitable for a complex, strongly nonlinear MIMO system such as a USV. The method is simple and easy to apply, but its parameters are tuned by hand from experience and control is effective only over a limited range, which leads to low adaptability.
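As a baseline for the conventional approach discussed above, the Python sketch below simulates course keeping with a fixed-gain PID controller acting on a first-order Nomoto steering model, T·ṙ + r = K·δ. The Nomoto parameters and PID gains are illustrative assumptions, not values identified from sea trials, and the fuzzy PD scheme itself is not reproduced here.

```python
# Minimal sketch: conventional PID course keeping on a first-order Nomoto
# model, T * r_dot + r = K * delta (r: yaw rate, delta: rudder angle).
# All parameters are illustrative assumptions.
import math

K, T = 0.2, 10.0               # assumed Nomoto gain (1/s) and time constant (s)
KP, KI, KD = 4.0, 0.02, 20.0   # assumed PID gains
DT = 0.1                       # integration step, s
RUDDER_MAX = math.radians(30)  # rudder saturation

def simulate(psi_ref: float, t_end: float = 300.0) -> float:
    """Return the final heading (rad) after a course change to psi_ref."""
    psi, r, integ, t = 0.0, 0.0, 0.0, 0.0
    while t < t_end:
        err = psi_ref - psi
        integ += err * DT
        delta = KP * err + KI * integ - KD * r            # derivative on yaw rate
        delta = max(-RUDDER_MAX, min(RUDDER_MAX, delta))  # saturate the rudder
        r += DT * (K * delta - r) / T                     # Nomoto yaw dynamics
        psi += DT * r
        t += DT
    return psi

if __name__ == "__main__":
    target = math.radians(20)   # 20 degree course change
    print(f"final heading: {math.degrees(simulate(target)):.2f} deg "
          f"(target {math.degrees(target):.1f} deg)")
```

The fixed gains illustrate the limitation mentioned above: they suit one operating point, whereas a fuzzy PD scheme shapes the control action online from the error and its rate of change.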
Choe Kum Song, a section head at the Faculty of Shipbuilding and Ocean Engineering, has obtained the USV’s hydrodynamic parameters from its sea trial data and built a linear 3-DOF mathematical model of the USV’s planar motion. On this basis, he has proposed a fuzzy PD course control algorithm for the USV and compared it with the conventional PID law.
The simulation results show that the designed fuzzy PD law is more effective than the conventional PID law.
...
Jo Dec 13, 2024
Particle size distribution measurement is indispensable in the pharmaceutical, materials, food and microbiological industries, in environmental health protection, and elsewhere. In particular, visualizing the dust particle distribution in real time is very important for reliably and intuitively ensuring the steady-state operation of a dust filtration system and thus high-quality filtration.
Optical measurement techniques that exploit the excellent optical properties of laser light sources, such as laser diffraction, the phase Doppler method, photon correlation spectroscopy, laser-induced fluorescence and light extinction, have been developed and used for particle size distribution measurement because they offer short measurement times, real-time measurement, remote measurement and so on.
However, the above optical dust measurement methods do not visualize the dust particle distribution in the measurement area of interest.
With the recent development of high-resolution, high-sensitivity and high-speed digital cameras that are easy to control automatically, there has been growing interest in determining particle velocity, size and shape while visualizing the particle distribution by direct imaging of the particles. At present, however, direct imaging with digital cameras is limited to relatively large spherical particles (about 50 μm).
The dust particles most harmful to the human body are in the range of 0.3-5 μm, so in general such a digital camera optical system cannot resolve individual dust particles on the image plane.
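The resolution limit behind this statement can be estimated from the Rayleigh criterion, d ≈ 0.61·λ/NA. The Python sketch below evaluates it for assumed illumination wavelength and numerical aperture values; it only indicates the order of magnitude and does not describe the actual optics of the apparatus.

```python
# Minimal sketch: diffraction-limited resolution from the Rayleigh
# criterion, d = 0.61 * lambda / NA.
# Wavelength and numerical aperture values are illustrative assumptions.

def rayleigh_limit_um(wavelength_nm: float, numerical_aperture: float) -> float:
    """Smallest resolvable feature size in micrometres."""
    return 0.61 * (wavelength_nm * 1e-3) / numerical_aperture

if __name__ == "__main__":
    wavelength = 532.0               # nm, assumed green laser illumination
    for na in (0.05, 0.1, 0.3):      # assumed numerical apertures
        d = rayleigh_limit_um(wavelength, na)
        print(f"NA = {na:.2f} -> resolution about {d:.2f} um")
```

With the modest numerical apertures typical of camera lenses, the limit is several micrometres, which is why particles in the 0.3-5 μm range cannot be imaged directly without some means of enhancing their apparent size or brightness.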
Ri Chol Man, a researcher at the Faculty of Physical Engineering, has theoretically and experimentally investigated the possibility of magnifying particle images by introducing intense light irradiation into the imaging system and, based on the results, has constructed a particle size distribution measurement apparatus and confirmed its usefulness.
The results demonstrate that the proposed method can be used in place of the light-scattering particle counting method, which is widely used in portable instruments for measuring dust particles in the size range harmful to the human body.
...
Jo Dec 10, 2024
Industry 4.0 is characterized by unprecedented connectivity through the Internet of Things and the Internet of Services, and by the cyber-physical system (CPS), which can be considered a system that brings the physical world and cyberspace together.
A CPS is defined as an engineered system that is built from, and depends upon, the synergy of computational and physical components. CPSs have attracted a lot of research attention, and many CPS-based applications have been built, such as smart healthcare, smart transportation, smart cities and cyber-physical vehicle tracking systems.
A CPS involves two parallel networks to be controlled, namely a physical network of interconnected infrastructure components and a cyber network comprising intelligent controllers and the communication links among them. CPSs interact with their environment via sensors and actuators. They are expected to enable factories to organize and control themselves autonomously, in a decentralized fashion and in real time; such factories are often referred to as smart factories.
Analyzing process history data across the product lifecycle requires new architectures and platforms for dealing with data of enormous volume, great variety and high velocity. Such data push conventional data ingestion and storage to their limits, so big data platforms are needed.
Cloud computing infrastructure can serve as an effective platform for the data storage required for big data analysis. Cloud computing not only provides facilities for the computation and processing of big data but is also delivered as a service model.
Kim Ryo Chol, a section head at the Faculty of Information Science and Technology, has proposed a big data aggregation and analysis system model for industrial cyber-physical systems and its implementation in a cloud computing environment. First, he explored a closed-loop cyber-physical system model built around a big data ingestion and analysis system that provides optimization feedback. Second, he proposed an architecture for the big data ingestion and analysis system and a vSphere-based private cloud configuration method for its implementation.
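The closed-loop idea described above can be illustrated with a toy Python example: sensor readings are ingested in batches, a simple analysis step summarizes them, and the result is fed back as an adjusted setpoint. Every name, number and rule in the sketch is hypothetical; it stands in for the general ingest-analyze-feedback cycle, not for the architecture proposed in the work.

```python
# Toy sketch of a closed-loop ingest -> analyse -> feedback cycle.
# All names, numbers and rules are hypothetical illustrations only.
from collections import deque
from statistics import mean
import random

class ClosedLoop:
    def __init__(self, setpoint: float, window: int = 50):
        self.setpoint = setpoint
        self.history = deque(maxlen=window)   # stand-in for the data store

    def ingest(self, readings):
        """Ingestion step: append a batch of sensor readings."""
        self.history.extend(readings)

    def analyse_and_feedback(self) -> float:
        """Analysis step: nudge the setpoint toward the recent average."""
        if self.history:
            avg = mean(self.history)
            self.setpoint += 0.1 * (avg - self.setpoint)  # optimization feedback
        return self.setpoint

if __name__ == "__main__":
    loop = ClosedLoop(setpoint=50.0)
    for _ in range(5):
        batch = [50.0 + random.uniform(-2.0, 5.0) for _ in range(10)]
        loop.ingest(batch)
        print(f"updated setpoint: {loop.analyse_and_feedback():.2f}")
```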
He compared the data read performance of the proposed method with that of a traditional database. The experimental results show that the proposed architecture is faster than a MySQL-based one in terms of data processing time.
...
Jo Dec 5, 2024
In recent years, rapid economic development in many countries has increased energy consumption, and as a result a tremendous amount of industrial waste heat is emitted into the atmosphere. Recovering this waste heat is of great significance for improving the thermal efficiency of systems.
The open cycle absorption heat pump (AHP) can greatly improve system energy efficiency by recovering the latent heat of the exhaust gas. The heat and mass transfer processes between the solution and the moist air are very complex, and many studies have been carried out on them.
These studies show that the open cycle AHP is very effective for recovering the latent heat of exhaust gas. Because the process depends on direct-contact heat and mass transfer between the exhaust gas and the solution in the absorber, it has mostly been simulated numerically by discretizing the heat balance equations for the moist air and the solution along the flow direction. This approach is valid when the air flow is uniformly distributed in the absorber; in reality the velocity differs from position to position, so the flow distribution inside the absorber should be taken into account as well.
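The discretization described above can be pictured as a simple marching scheme: the absorber is divided into segments along the flow direction and, in each segment, sensible heat and moisture are exchanged between the moist air and the solution. The Python sketch below is a heavily simplified co-current illustration; the transfer conductances, flow rates and the linear equilibrium humidity relation are all assumptions and do not represent the model used in the study.

```python
# Heavily simplified sketch of a 1-D marching discretization of the heat
# and mass balance between moist air and an absorbent solution
# (co-current flow). All coefficients and flows are assumptions.

N = 50                       # number of segments along the flow direction
H_A = 60.0                   # sensible heat conductance per segment, W/K
K_M = 0.02                   # mass transfer conductance per segment, kg/s per (kg/kg)
H_FG = 2.45e6                # latent heat of vaporization, J/kg
M_AIR, CP_AIR = 0.5, 1006.0  # air mass flow (kg/s) and specific heat (J/(kg*K))
M_SOL, CP_SOL = 1.0, 2500.0  # solution mass flow (kg/s) and specific heat (J/(kg*K))

def w_equilibrium(t_sol: float) -> float:
    """Assumed linear equilibrium humidity ratio (kg/kg) over the solution."""
    return 0.004 + 0.0004 * (t_sol - 20.0)

def march(t_air: float, w_air: float, t_sol: float):
    """March through the absorber, updating air and solution states."""
    for _ in range(N):
        q_sens = H_A * (t_air - t_sol)                        # W, air -> solution
        m_abs = K_M * max(w_air - w_equilibrium(t_sol), 0.0)  # kg/s of vapour absorbed
        t_air -= q_sens / (M_AIR * CP_AIR)
        w_air -= m_abs / M_AIR
        t_sol += (q_sens + m_abs * H_FG) / (M_SOL * CP_SOL)   # sensible + latent heat
    return t_air, w_air, t_sol

if __name__ == "__main__":
    t_a, w_a, t_s = march(t_air=45.0, w_air=0.05, t_sol=25.0)
    print(f"outlet air: {t_a:.1f} C, {w_a * 1000:.1f} g/kg; solution: {t_s:.1f} C")
```

As the text notes, such a one-dimensional march assumes a uniform air velocity across the absorber; a CFD treatment is needed when the flow distribution matters.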
Ri Kwang Chol, a researcher at the Faculty of Heat Engineering, has built a mathematical model describing the heat and mass transfer in the absorber of an open cycle absorption heat pump that takes the air flow characteristics inside it into account, and has analyzed it with Ansys Fluent.
The results show that the absorption performance is affected by several parameters, among which the solution flux has the greatest effect, and that the temperature and moisture content of the outlet air depend on the inlet air velocity, whose optimum value in the absorber is 1.5 m/s.
...