# Lecture 1: PDC
### Distributed Systems
1. **Definition**:
- A *distributed system* is a network of independent computers that work together to appear as a single, unified system. Imagine multiple computers collaborating to act as one "supercomputer."
2. **Key Points**:
- **Autonomous Components**: Each computer (or component) operates independently.
- **Collaboration**: These computers work together to solve problems, sharing data and processing tasks.
- **Diversity**: Distributed systems don’t require all computers to be the same; they can range from powerful servers to tiny devices.
- **Consistency for Users**: The system hides complexity, making it look like a single system to users.
- **Expandability**: Distributed systems are designed to scale up, which means new computers or resources can be added easily.
3. **Transparency and Reliability**:
- **Transparency**: The system hides where data or resources are actually located across the network.
- **Reliability**: If one part of the system fails, others can take over, so the system usually remains available.
4. **Middleware**:
- Middleware is the software "glue" that connects various computers, applications, or networks, allowing them to communicate and work together. It sits between the operating system and applications, making it easier to link diverse platforms.
---
### Characteristics of Distributed Systems
1. **Accessibility**:
- Distributed systems aim to make resources (files, applications, etc.) easily accessible across the network, enabling easy collaboration and data sharing.
2. **Security**:
   - Because distributed systems are open and communicate over networks, security is crucial. They must protect against unauthorized access and eavesdropping during communication.
3. **Openness**:
- These systems are "open" in the sense that they follow standard protocols for communication. These protocols are rules that specify how messages are formatted and understood.
4. **Scalability**:
- *Scalability* is the system's ability to grow. There are three main ways a distributed system can be scalable:
- **Size**: Easily add more resources or users.
- **Geographic**: Handle users and resources far apart.
     - **Administrative**: Remain manageable even when the system spans multiple administrative domains.
---
### Types of Distributed Systems
1. **Client/Server Model**:
- A client requests resources or services from a server. The server responds with the requested resources, such as a web server providing web pages.
2. **Peer-to-Peer (P2P)**:
- Each computer (peer) is equal, directly sharing resources (like files) with other peers without a central server. Examples include file-sharing networks like BitTorrent.
3. **Three-tier and N-tier**:
   - *Three-tier* systems split into three layers: Presentation (user interface), Application (business logic), and Data (storage). N-tier systems extend this pattern with additional layers.
4. **Clustered Systems**:
- Multiple interconnected computers handle high workloads together. If one computer fails, others can continue the work.
5. **Grid Computing**:
- Uses resources from various locations to solve large computational tasks. Often used for scientific research.
6. **Cloud Computing**:
- Provides on-demand access to resources over the internet, including storage, applications, and processing power. Examples include Google Cloud and Amazon Web Services.
7. **Distributed Databases**:
- Data is spread across multiple servers. NoSQL databases like Cassandra and MongoDB are examples that support massive amounts of data.
8. **Decentralized Systems**:
- There’s no central authority or control point. Blockchain networks are an example.
9. **Sensor Networks**:
- Many small sensors collect and transmit data, like IoT devices monitoring temperature or air quality.
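The client/server model at the top of this list can be sketched with Python's standard-library sockets. This is a minimal illustration, not a production server: it handles exactly one request, runs both roles in one process for demonstration, and the `"GET /page"` message is just a placeholder for a real request.

```python
import socket
import threading

# Server side: set up the listening socket first, so the client
# cannot connect before the server is ready. Port 0 asks the OS
# to pick any free port.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def serve_one_request():
    """Accept one connection, read the request, send a reply."""
    conn, _ = srv.accept()
    with conn:
        request = conn.recv(1024).decode()
        conn.sendall(f"server received: {request}".encode())
    srv.close()

# Run the server in a background thread; in a real deployment the
# server and client would be separate machines on the network.
threading.Thread(target=serve_one_request, daemon=True).start()

# Client side: connect, send a request, wait for the response.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", port))
    cli.sendall(b"GET /page")
    reply = cli.recv(1024).decode()

print(reply)  # server received: GET /page
```

The same request/response shape underlies real protocols such as HTTP; a web server is just a far more capable version of `serve_one_request`.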
---
### Synchronous vs. Asynchronous Computation/Communication
1. **Synchronous**:
- **Sequential Execution**: Tasks run one after the other. A task waits until the previous one finishes before starting.
- **Blocking**: If a task takes time, the whole program waits until it’s done, which can make things slow.
- **Example**: If a program reads data from a file, it waits until the file reading is complete before moving to the next task.
2. **Asynchronous**:
- **Concurrent Execution**: Tasks run independently, without waiting for each other.
   - **Non-Blocking**: The program doesn't pause for a task to finish; it moves on to other work and handles the result when it becomes available.
- **Example**: In a web app, when multiple requests are sent to the server simultaneously, the app can keep running and display results as they come in.
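The difference can be seen with Python's `asyncio`. In this sketch, `asyncio.sleep` stands in for a slow network call (an assumption for illustration); run synchronously the three calls would take the sum of their delays, but run asynchronously they overlap and finish in roughly the time of the longest one.

```python
import asyncio
import time

async def fetch(name, delay):
    """Simulate a slow network request with a non-blocking sleep."""
    await asyncio.sleep(delay)
    return f"{name} done"

async def main():
    start = time.perf_counter()
    # asyncio.gather starts all three "requests" concurrently, so the
    # total wait is about 0.2 s (the longest delay), not 0.45 s (the sum).
    replies = await asyncio.gather(
        fetch("a", 0.10), fetch("b", 0.15), fetch("c", 0.20)
    )
    print(f"elapsed: {time.perf_counter() - start:.2f}s")
    return replies

results = asyncio.run(main())
print(results)  # ['a done', 'b done', 'c done']
```

This mirrors the web-app example above: the program keeps running while requests are in flight and collects each result as it arrives.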
---
### Story Recap
Imagine *Cyberspaceia*, a futuristic kingdom with *Computronia*, the super-smart city with a big central brain and smaller brains around it. When a big storm hit, a single brain couldn’t handle all the work. But by using distributed computing, all the smaller brains, including a clever one named Pixel, shared the work. Together, they accurately predicted the storm’s path, saving Cyberspaceia’s people. This showed that multiple small brains working together are more powerful than one big brain alone.
---
This covers the essentials. Review each section, and let me know if you need any more explanation on specific areas.